Howto – Remote tcpdump capture with Wireshark on Windows

For diagnostic purposes, it might sometimes be necessary to perform a remote capture of network traffic on a Linux box. Conveniently, we can do this from a laptop running Windows and Wireshark, so we don’t need to first create a packet capture file and then transfer it to our computer. Instead, this procedure connects over SSH to the remote Linux machine, starts tcpdump, redirects its output in real time over the SSH connection to our Windows machine, and feeds it into Wireshark.

The tools we are using for this on Windows are plink.exe (from the PuTTY suite of tools), tcpdump and Wireshark.

Before we start monitoring, we will need to give tcpdump permission to capture raw packets:

By logging in with username and password:

  • "C:\Program Files (x86)\PuTTY\plink.exe" -ssh -t -pw password user@my.remotelinuxbox.com sudo setcap cap_net_raw,cap_net_admin=eip /usr/sbin/tcpdump

By logging in with a key:

  • "C:\Program Files (x86)\PuTTY\plink.exe" -ssh -t -i "S:\mykey.ppk" user@my.remotelinuxbox.com sudo setcap cap_net_raw,cap_net_admin=eip /usr/sbin/tcpdump

Note: plink expects a PuTTY-format private key (.ppk), not a .pub public key.

To monitor your remote Linux box, connecting with a username and password:

  • "C:\Program Files (x86)\PuTTY\plink.exe" -ssh -pw password user@my.remotelinuxbox.com tcpdump -n -nn -s 0 -U -w - -i eth0 | "C:\Program Files\Wireshark\wireshark.exe" -i - -k

To monitor your remote Linux box, connecting with a key:

  • "C:\Program Files (x86)\PuTTY\plink.exe" -ssh -i "S:\mykey.ppk" user@my.remotelinuxbox.com tcpdump -n -nn -s 0 -U -w - -i eth0 | "C:\Program Files\Wireshark\wireshark.exe" -i - -k
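On Windows 10 and later (and on any Linux or macOS workstation), the built-in OpenSSH client can take the place of plink. A minimal sketch of the same pipeline, with a hypothetical host name and interface, assembled into a variable so each part is visible:

```shell
# Hypothetical host and capture interface -- adjust to your environment.
HOST="user@my.remotelinuxbox.com"
IFACE="eth0"

# Same idea as the plink one-liners above: tcpdump writes pcap data to stdout
# (-w -), ssh carries it over the wire, and Wireshark reads it from stdin
# (-i -) and starts capturing immediately (-k).
PIPELINE="ssh $HOST tcpdump -n -nn -s 0 -U -w - -i $IFACE | wireshark -k -i -"
echo "$PIPELINE"
```

On Windows the last component would be the full path to wireshark.exe, exactly as in the plink examples.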

Howto – Capture with tcpdump for analysis with Wireshark on Linux

Sniffing on the Linux machine

Capturing a sniffer dump on a Linux machine is easy: we install the tcpdump package to capture network packets and write them to a file for further analysis with Wireshark.

  • apt-get install tcpdump
  • tcpdump -i <interface> -s 65535 -w <some-file>

You can then transfer the file with WinSCP to your Windows workstation for analysis.
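For longer captures, tcpdump’s standard ring-buffer options keep the output files bounded. A sketch (interface and filename are placeholders):

```shell
# Rotate the capture file every 100 million bytes (-C 100) and keep at most
# ten files (-W 10), overwriting the oldest; -s 65535 captures full packets.
CAPTURE="tcpdump -i eth0 -s 65535 -C 100 -W 10 -w capture.pcap"
echo "$CAPTURE"
```

tcpdump then writes a series of numbered capture files, each of which can be transferred and opened in Wireshark like any single capture.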

Sniffing on the Linux machine with redirection to your Windows Wireshark

If you have PuTTY and plink installed, you can also capture directly on a remote Linux machine and redirect the output to Wireshark on your Windows station for real-time analysis.

If you are using password authentication on the linux machine:

  • "C:\Program Files (x86)\PuTTY\plink.exe" -ssh -pw password root@somemachine.localdomain tcpdump -n -nn -s 0 -U -w - -i bfe0 vlan 99 and icmp | "C:\Program Files\Wireshark\wireshark.exe" -i - -k

Or if you’re using key based authentication:

  • "C:\Program Files (x86)\PuTTY\plink.exe" -ssh -i "C:\mykeystore\somekey.ppk" root@somemachine.localdomain tcpdump -n -nn -s 0 -U -w - -i bfe0 vlan 99 and icmp | "C:\Program Files\Wireshark\wireshark.exe" -i - -k

Howto – Install and Connect to Exchange online remote powershell with 2FA

  1. Open Internet Explorer and go to https://portal.office.com/
  2. Click Admin in the app launcher
  3. Go to Admin centers in the left-side menu and click Exchange
  4. In the Exchange admin center, click Hybrid, then click the Configure button under the Exchange Online PowerShell module section

5. After clicking the Configure button, the application launches automatically from the web


Note:
if you are using a browser other than Internet Explorer, you will get an "Application can't be started" error message.

6. The application install window will open; click Install

7. The Exchange Online PowerShell module with MFA support is now installed on your machine.

How to connect to Exchange Online PowerShell with MFA

After the installation, a Microsoft Exchange Online Remote PowerShell Module shortcut will be created on your desktop.

1. Open the Microsoft Exchange Online Remote PowerShell Module.

2. Use the module's Connect-EXOPSSession cmdlet to connect, replacing the UPN with your admin account:

Connect-EXOPSSession -UserPrincipalName admin@yourdomain.onmicrosoft.com

3. Sign in when prompted.

4. In the call verification window that opens, answer the verification call on your phone.

5. Multi-factor authentication is now verified, and you can access Exchange Online PowerShell.

Howto – Install and Connect to Office 365 remote powershell with 2FA

Step 1: Install required software

These steps are required only once per computer, not every time you connect. However, you’ll likely need to install newer versions of the software periodically.

  1. Open an elevated Windows PowerShell command prompt (run Windows PowerShell as an administrator).
  2. In the Administrator: Windows PowerShell command window, run this command:
  • Install-Module -Name AzureAD -AllowClobber

Step 2: Connect to your Office 365 subscription

To connect with just an account name and password:

  • $UserCredential = Get-Credential
  • Connect-AzureAD -Credential $UserCredential

To connect with multi-factor authentication (MFA):

  • Connect-AzureAD

After connecting, you can use the cmdlets of the Azure Active Directory V2 PowerShell module.

Howto – Convert a VMDK from Thick to Thin provisioning on ESXi Server

Abstract:

This article gives the steps to convert a thick-provisioned virtual hard drive (VMDK) to thin-provisioned on ESX 4 or newer.

Option 1

  1. Migrate the VM Guest to another datastore and change the advanced configuration of the hard drive to thin provisioned.

Option 2

If you can’t vMotion or migrate the VM to another datastore, you can use the vmkfstools command on the VM host to convert it.

  1. Optional
    1. Login to the VM Guest
    2. Make sure the VMTools are updated
    3. Run an OS Defrag of the drives
    4. Use Microsoft SDelete
      1. http://technet.microsoft.com/en-us/sysinternals/bb897443
      2. Download and install on the VM Guest OS
      3. Run sdelete -c
  2. Power Off the VM Guest
  3. Login to the VM Host that has the VM Guest either by Power CLI or SSH
    1. You may need to enable SSH and ESX Shell to do this
  4. Browse to the VM Guest file location
    1. Example: cd /vmfs/volumes/mydatastore/vmname/
  5. Clone original disk to a thin copy
    1. vmkfstools -i SERVERNAME.vmdk -d thin thin-SERVERNAME.vmdk
  6. Rename the original disk
    1. vmkfstools -E SERVERNAME.vmdk orig-SERVERNAME.vmdk
  7. Rename the thin disk to the original disk name
    1. vmkfstools -E thin-SERVERNAME.vmdk SERVERNAME.vmdk
  8. Remove VM Guest from VM Host inventory
  9. Add the VM Guest back to the VM Host inventory
    1. This ensures the VM host registers that the disk(s) are now thin provisioned
    2. This may not be needed for vSphere 5
  10. Verify that the VM Guest boots and runs correctly
  11. Remove the original thick disk(s)
    1. vmkfstools -U orig-SERVERNAME.vmdk
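The reason the thin clone is smaller is sparse allocation: blocks are only backed by storage once something is written to them. The same effect can be demonstrated with a sparse file on any ordinary Linux box (this is an illustration, not an ESXi command):

```shell
# Create a file with an apparent size of 1 GB but no allocated data blocks.
truncate -s 1G demo-thin.img

# Apparent size (bytes) vs. actually allocated space (KB, typically near 0):
ls -l demo-thin.img
du -k demo-thin.img

ALLOC_KB=$(du -k demo-thin.img | cut -f1)
echo "allocated: ${ALLOC_KB} KB"
rm -f demo-thin.img
```

vmkfstools -i … -d thin does the analogous thing at the VMDK level: only blocks the guest has actually written survive into the clone, which is also why running SDelete first (zeroing deleted data) improves the result.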

Howto – Install and Connect to SharePoint online remote powershell with 2FA

Before you get started using PowerShell to manage SharePoint Online, make sure that the SharePoint Online Management Shell is installed.

The SharePoint Online Management Shell can be downloaded from this location:

https://www.microsoft.com/en-us/download/details.aspx?id=35588

To connect with a user name and password

  1. Fill in the values for the $adminUPN and $orgName variables (replacing all the text between the quotes, including the < and > characters), and then run the following commands at the SharePoint Online Management Shell command prompt:

$adminUPN="<the full email address of a SharePoint administrator account, example: jdoe@contosotoycompany.onmicrosoft.com>"
$orgName="<name of your Office 365 organization, example: contosotoycompany>"
$userCredential = Get-Credential -UserName $adminUPN -Message "Type the password."
Connect-SPOService -Url https://$orgName-admin.sharepoint.com -Credential $userCredential
  2. When prompted with the Windows PowerShell credential request dialog box, type the password for the SharePoint Online SharePoint administrator account.

To connect with multifactor authentication (MFA)

  1. Fill in the value for the $orgName variable (replacing all the text between the quotes, including the < and > characters), and then run the following commands at the SharePoint Online Management Shell command prompt:

$orgName="<name of your Office 365 organization, example: contosotoycompany>"
Connect-SPOService -Url https://$orgName-admin.sharepoint.com
  2. When prompted with the Microsoft SharePoint Online Management Shell dialog box, type the account name and password for a SharePoint administrator account, and then click Sign in.
  3. Follow the instructions in the Microsoft SharePoint Online Management Shell dialog box to provide the additional authentication information, such as a verification code, and then click Sign in.

You are now ready to begin executing SharePoint Online commands.

Reference: https://support.office.com/nl-nl/article/verbinding-maken-tussen-powershell-en-office-365-services-06a743bb-ceb6-49a9-a61d-db4ffdf54fa6

Howto – Install and configure the snmpd daemon under linux

Installing the SNMPD daemon

Before we can use SNMP, we need to install the snmpd package. Update your apt cache and install the required binaries:

sudo apt-get update
sudo apt-get install -y snmp snmpd

Configure the SNMP daemon

Make the daemon listen on all IP addresses

By default the snmpd daemon listens only for connections from the local host; we need to change the agentAddress parameter so the SNMP daemon will listen on all IP addresses.

For Ubuntu 10.04.4 LTS:

vi /etc/default/snmpd
# remove 127.0.0.1 from snmpdopts

For other distributions:

# Listen for connections from the local system only
# agentAddress udp:127.0.0.1:161 <-- comment this entry out
# Listen for connections on all interfaces (both IPv4 and IPv6)
agentAddress udp:161,udp6:[::1]:161 <-- add this entry to listen on all IP addresses

Restart the daemon for the changes to take effect:

service snmpd restart

Create an authentication user for SNMPv3:

In our set-up we will be using SNMPv3 with an encrypted connection and user authentication.
To facilitate this, we need to create an SNMP user account and password to validate our connection to the SNMP server.

Automated:

service snmpd stop
net-snmp-config --create-snmpv3-user -ro -A password -X password -a MD5 -x AES snmpv3user

service snmpd start

Or manually:

service snmpd stop
echo 'createUser snmpv3user MD5 "password" AES' | sudo tee -a /var/lib/snmp/snmpd.conf

echo 'rouser snmpv3user' | sudo tee -a /usr/share/snmp/snmpd.conf
service snmpd start

Testing the SNMP daemon

Let’s test our SNMP daemon with snmpwalk; if everything is OK, we should be able to walk the SNMP tree.

snmpwalk -v3 -l authPriv -u snmpv3user -a MD5 -A password -x AES -X password localhost


If it doesn’t work when trying this from another station, the iptables firewall may be blocking the connection to your SNMP daemon.

Allow access from any ip address:

iptables -I INPUT -p udp -m udp --dport 161 -j ACCEPT
iptables -I INPUT -p udp -m udp --dport 162 -j ACCEPT
iptables-save > /etc/iptables.up.rules

Allow access from a specific source IP address:

iptables -I INPUT -s my.source.ip.addr/32 -p udp -m udp --dport 161 -j ACCEPT

iptables -I INPUT -s my.source.ip.addr/32 -p udp -m udp --dport 162 -j ACCEPT
iptables-save > /etc/iptables.up.rules

Extra steps on debian

Apparently snmpd does not work out of the box on Debian systems.
On Debian we need to install the SNMP MIBs, which do not come bundled with the snmpd package.
We can download the MIBs with a package called snmp-mibs-downloader.
But before we can install this downloader, we need to activate the non-free repositories for apt.

echo "deb http://ftp.us.debian.org/debian/ jessie main non-free" >>  /etc/apt/sources.list 

echo "deb-src http://ftp.us.debian.org/debian/ jessie main non-free" >> /etc/apt/sources.list

apt-get update

Now install the snmp-mibs-downloader:

apt-get install snmp-mibs-downloader

Now download the required MIBs:

download-mibs

Howto – Implement policy routing on Linux

Problem Description

You have built two or more network cards into one Linux system, and each of these cards has its own default gateway. By default, you can only have one default gateway on a system. The situation described would lead to asymmetric routing, whereby the router would drop the packets.

Solution

The iproute2 package, which is included in all current Linux distributions and as a rule already installed, can be used to solve this problem. Normally, a Linux system has only one routing table, in which only one default gateway can be entered. With iproute2, you can set up an additional routing table and have the system use that table based on rules.

Initial Position

We will assume that we have two interfaces, eth0 and eth1. The two networks used are 192.168.0.0/24 and 10.10.0.0/24, where the first IP address in each network is the gateway. Under Debian, the initial configuration in /etc/network/interfaces would appear as follows.

$ sudo vi /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug eth0
iface eth0 inet static
     address 192.168.0.10
     netmask 255.255.255.0
     gateway 192.168.0.1

# The secondary network interface
allow-hotplug eth1
iface eth1 inet static
     address 10.10.0.10
     netmask 255.255.255.0

Adding a Second Routing Table

To add a new routing table, the file /etc/iproute2/rt_tables must be edited. We will call the routing table "rt2" and set its preference to 1.

$ sudo vi /etc/iproute2/rt_tables
#
# reserved values
#
255     local
254     main
253     default
0       unspec
#
# local
#
#1      inr.ruhep
1       rt2

Configuring the New Routing Table

From this point, four commands are needed to achieve our goal. First, the new routing table needs to be populated, which is done using the following command.

$ sudo ip route add 10.10.0.0/24 dev eth1 src 10.10.0.10 table rt2
$ sudo ip route add default via 10.10.0.1 dev eth1 table rt2

The first command says that the network, 10.10.0.0/24, can be reached through the eth1 interface. The second command sets the default gateway.

Routing Rules

So that the system knows when to use our new routing table, two rules must be configured.

$ sudo ip rule add from 10.10.0.10/32 table rt2
$ sudo ip rule add to 10.10.0.10/32 table rt2

These rules say that both traffic from the IP address 10.10.0.10 and traffic directed to this IP address should use the rt2 routing table.

Making the Configuration permanent

The ip rule and ip route commands become invalid after a reboot, so they should be placed in a script (for example, /etc/rc.local) that is executed once the network has been started at boot. On Debian, these commands can also be written directly into the /etc/network/interfaces file, which would then appear as follows.

$ sudo vi /etc/network/interfaces
iface eth1 inet static
	address 10.10.0.10 
	netmask 255.255.255.0
	post-up ip route add 10.10.0.0/24 dev eth1 src 10.10.0.10 table rt2
	post-up ip route add default via 10.10.0.1 dev eth1 table rt2
	post-up ip rule add from 10.10.0.10/32 table rt2
	post-up ip rule add to 10.10.0.10/32 table rt2

More than Two Network Cards or Gateways

If there are more than two networks, a routing table can be created for each additional network analogous to the example presented above.

Testing the Configuration

The following commands can be used to ensure that the rules as well as the routing entries are working as expected.

$ sudo ip route list table rt2 
$ sudo ip rule show
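A quick way to see which table and next hop the kernel would pick for a given packet is ip route get; with the addresses above, `ip route get 8.8.8.8 from 10.10.0.10` should resolve via the default route in table rt2. A minimal runnable demonstration against the loopback address (works on any Linux system, no extra configuration assumed):

```shell
# Ask the kernel how it would route a packet to 127.0.0.1; loopback traffic
# resolves via the reserved "local" table, which `ip rule show` also lists.
ROUTE=$(ip route get 127.0.0.1)
echo "$ROUTE"
```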

Howto – Configure Windows Time Service

Apropos:

Configuring Windows time in an Active Directory domain, from zero to hero.

In a domain environment, the master clock is hosted on the domain controller which holds the PDC emulator role. This domain controller synchronizes its time from an external time source.

All other domain controllers and workstations utilize the domain hierarchy to locate their time source.

Resetting and restoring the entire domain time synchronisation hierarchy consists of the following steps:

1. Reset the windows time service on the domain controller that holds the PDC emulator and reconfigure the pdc emulator with an external time source.

2. Reset the time service on any domain member server and reconfigure them to follow the domain hierarchy

3. If necessary, reset any workstations and reconfigure them to follow the domain hierarchy.

NOTE: If your domain controllers are hosted on virtual infrastructure, it is necessary to disable the virtual time provider; failing to do so will render your domain time synchronisation invalid.

reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider /v Enabled /t reg_dword /d 0 

1. Reset the windows time service on the domain controller that holds the PDC emulator and reconfigure the pdc emulator with an external time source.

reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider /v Enabled /t reg_dword /d 0
net stop w32Time
w32tm /unregister
w32tm /register
net start w32time
W32tm /config /manualpeerlist:pool.ntp.org,0x1 /syncfromflags:manual /reliable:yes /update
net stop w32time & net start w32time & W32tm /resync /rediscover

2. Reset the time service on any domain member server and reconfigure them to follow the domain hierarchy

reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider /v Enabled /t reg_dword /d 0
net stop w32Time
w32tm /unregister
w32tm /register
net start w32time
w32tm /config /syncfromflags:domhier /reliable:yes /update
net stop w32time & net start w32time & W32tm /resync /rediscover

3. If necessary, reset any workstations and reconfigure them to follow the domain hierarchy.

net stop w32Time
w32tm /unregister
w32tm /register
net start w32time
w32tm /config /syncfromflags:domhier /update
net stop w32time & net start w32time & W32tm /resync /rediscover

How to check the time service current status:

w32tm /query /status /verbose 

How to force a time sync:

w32tm /resync /force 

How to verify a specific time server:

w32tm /monitor /computers:10.1.1.101 

How to compare the time of the local computer with another time source:

w32tm /stripchart /computer:10.1.1.1 /samples:1 /dataonly 

Howto – Set up a highly available instance of Rancher

Rancher is an open source software platform that enables organizations to run and manage Docker and Kubernetes in production. With Rancher, organizations no longer have to build a container services platform from scratch using a distinct set of open source technologies. Rancher supplies the entire software stack needed to manage containers in production.

Abstract:

In this guide we will be setting up a highly available instance of Rancher. The setup consists of four servers: three on which Docker is installed, and a fourth which performs the load balancing across the three Rancher servers. The Rancher installation itself is deployed containerized, and HAProxy serves as the load balancer.

Set up Docker on Ubuntu:

Update the apt package index:

$ sudo apt-get update 

Install packages to allow apt to use a repository over HTTPS:

$ sudo apt-get install -y apt-transport-https ca-certificates curl gnupg2 software-properties-common

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

$ sudo apt-get update

$ sudo apt-get install docker-ce=17.03.3~ce-0~ubuntu-xenial

Install Prerequisites:

Install Kubectl:

$ sudo apt-get update && sudo apt-get install -y apt-transport-https
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
$ sudo apt-get update
$ sudo apt-get install -y kubectl

Install rke:

$ wget https://github.com/rancher/rke/releases/download/v0.1.11/rke_linux-amd64
$ mv rke_linux-amd64 rke
$ sudo mv rke /usr/bin/
$ sudo chmod +x /usr/bin/rke
$ rke --version

Configure passwordless SSH login:

(needs to be done on the first server only)

Under the adm account:

$ ssh-keygen (no passphrase)
$ ssh-copy-id adm@rancher001.exampledomain.com
$ ssh-copy-id adm@rancher002.exampledomain.com
$ ssh-copy-id adm@rancher003.exampledomain.com

Test passwordless login:

$ ssh adm@rancher001.exampledomain.com
$ ssh adm@rancher002.exampledomain.com
$ ssh adm@rancher003.exampledomain.com

Create a YAML deployment file for our Rancher cluster

Under the adm user account:

$ cat << EOF > /home/adm/rancher-cluster.yml
nodes:
  - address: rancher001.exampledomain.com
    user: adm
    role: [controlplane,worker,etcd]
  - address: rancher002.exampledomain.com
    user: adm
    role: [controlplane,worker,etcd]
  - address: rancher003.exampledomain.com
    user: adm
    role: [controlplane,worker,etcd]
services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h
EOF

Add the adm account to the docker group (on all three hosts):

$ sudo usermod -aG docker adm 

(now first close your ssh session and log back in to apply the group membership change)

Install the kubernetes cluster:

$ rke up --config ./rancher-cluster.yml 

The installer will have created a credential file for kubectl; we need to copy it into the right place:

$ sudo chown adm:adm ~/kube_config_rancher-cluster.yml
$ mkdir ~/.kube
$ cp kube_config_rancher-cluster.yml /home/adm/.kube/config

Test kubectl:

$ kubectl get all 

Install helm & tiller

$ wget https://storage.googleapis.com/kubernetes-helm/helm-v2.11.0-linux-amd64.tar.gz
$ tar -xvf helm-v2.11.0-linux-amd64.tar.gz
$ cd linux-amd64/
$ sudo cp helm /usr/local/bin
$ sudo cp tiller /usr/local/bin
$ sudo chmod 755 /usr/local/bin/helm
$ sudo chmod 755 /usr/local/bin/tiller
$ kubectl -n kube-system create serviceaccount tiller
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller

Test your tiller installation:

$ kubectl -n kube-system rollout status deploy/tiller-deploy 

And run the following command to validate Helm can talk to the tiller service:

$ helm version 

Install Rancher

Add the rancher stable repository to helm package manager:

$ helm repo add rancher-stable https://releases.rancher.com/server-charts/stable

Install rancher:

$ helm install rancher-stable/rancher --name rancher --namespace cattle-system --set hostname=rancherprov.exampledomain.com --set ingress.tls.source=secret --set privateCA=true

Now create and sign a server certificate for our installation:

$ openssl genrsa -out ./rancherprov.exampledomain.com.key 2048
$ openssl req -new -sha256 -key ./rancherprov.exampledomain.com.key -out ./rancherprov.exampledomain.com.csr

Answer the prompts, for example:

BE
SomeLocation
SomeTown
SomeCompany
IT
rancherprov.exampledomain.com
support@exampledomain.com
(no challenge password or optional company name, just hit Enter)

$ cat rancherprov.exampledomain.com.csr

Sign this certificate signing request on our own certificate authority:

In this case the request was signed on a Microsoft Certificate Authority.

Open a browser and navigate to your certificate authority: https://certificates.exampledomain.com/certsrv

  • request a certificate
  • advanced certificate request
  • paste the certificate signing request
  • select webserver template
  • paste in attributes: san:dns=rancherprov.exampledomain.com
  • submit
  • select base64
  • select download certificate

Save the downloaded certificate as rancherprov.exampledomain.com.crt.

Add the certificate and key to the kubernetes cluster:

$ kubectl -n cattle-system create secret tls tls-rancher-ingress --cert=rancherprov.exampledomain.com.crt --key=rancherprov.exampledomain.com.key 

Copy the certificate of our internal certificate authority to a file: cacerts.pem

Now install our certificate authority certificate in the kubernetes cluster:

$ kubectl -n cattle-system create secret generic tls-ca --from-file=cacerts.pem 

Upgrade Rancher

$ helm upgrade rancher rancher-stable/rancher \
  --set hostname=rancherprov.exampledomain.com \
  --set ingress.tls.source=secret \
  --set privateCA=true

Install a load balancer to load balance the three rancher servers:

Installing HAProxy

It’s easy:

$ sudo apt install -y haproxy 

Configuring HAProxy

The haproxy configuration file can be found here: /etc/haproxy/haproxy.cfg

$ sudo vi /etc/haproxy/haproxy.cfg 

After modifying the configuration file, restart the HAProxy service:

$ sudo systemctl restart haproxy 

Example configuration:

This example will load balance HTTP and HTTPS connections to three different backend servers. There is no SSL offloading by HAProxy in this example; rather, the SSL connections use TCP load balancing with a stick table.

global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon
# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private

# Default ciphers to use on SSL-enabled listening sockets.
# For more information, see ciphers(1SSL). This list is from:
#  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
# An alternative list with additional directives can be obtained from
#  https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
ssl-default-bind-options no-sslv3
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http
frontend rancher_http
bind *:80
mode http
default_backend rancher_http_backendnodes
frontend rancher_https
bind *:443
mode tcp
default_backend rancher_https_backendnodes
backend rancher_http_backendnodes
mode http
balance roundrobin
option forwardfor
http-request set-header X-Forwarded-Port %[dst_port]
http-request add-header X-Forwarded-Proto https if { ssl_fc }
option httpchk HEAD / HTTP/1.1\r\nHost:\ rancher.exampledomain.com
server node1 10.1.1.160:80 check
server node2 10.1.1.161:80 check
server node3 10.1.1.162:80 check
backend rancher_https_backendnodes
mode tcp
balance roundrobin
stick-table type ip size 200k expire 30m
stick on src
default-server inter 1s
server node1 10.1.1.160:443 check
server node2 10.1.1.161:443 check
server node3 10.1.1.162:443 check
listen stats
bind :32700
stats enable
stats uri /
stats hide-version
stats auth admin:mypassword