To enable haproxy/keepalived failover on a virtual machine, some kernel configuration tuning is needed: the settings on both the VM host and the guest have to be adapted from what you would use on a real physical server. The case I will describe below is built upon a simple bridged setup. At the time of install there were plenty of issues running openvswitch on ubuntu; the package wasn't stable. I haven't looked back since to see if the problem still exists, but as they say: don't catch a falling knife, and don't fix what isn't broken. It works fine like this.

Let's take a look at some configs. Here's the relevant part of the sysctl configuration on the virtual machine itself. We are using keepalived to manage the floating IP on a VM network bridged with the host. All IPs are in private address space.

net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_syncookies = 1
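For context, the keepalived side of the floating IP could look like the minimal sketch below. The interface name, virtual_router_id, priority, and VIP are placeholders, so adjust them to your environment; note that vrid 51 matches one of the instances visible in the tcpdump output further down.

```
vrrp_instance VI_51 {
    state MASTER               # BACKUP on the standby node
    interface eth0             # interface on the bridged VM network
    virtual_router_id 51       # must match on both nodes
    priority 101               # use a lower value on the backup node
    advert_int 1               # advertisement interval in seconds
    virtual_ipaddress {
        192.168.1.10/24        # the floating IP (placeholder)
    }
}
```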

On the VM host you'll need to set these:

net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.eth1.proxy_arp = 1
net.ipv4.conf.eth0.proxy_arp = 1
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
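To make these host settings survive a reboot, one option (a sketch, assuming a distribution that reads drop-ins from /etc/sysctl.d/; the file name is just an example) is:

```
# /etc/sysctl.d/99-vm-bridge.conf
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.eth1.proxy_arp = 1
net.ipv4.conf.eth0.proxy_arp = 1
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
```

Apply it without rebooting via `sysctl --system`. Keep in mind that the net.bridge.* keys only exist once the bridge netfilter code is loaded, so the module may need to be loaded first.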

Adjust the interface labels (eth0/eth1) for your own setup. With these settings, bridged traffic between the VM host and the guest will not be subjected to the firewall, and you won't run into ARP issues. If you want to check whether the keepalived instances are talking to each other, just use tcpdump:

tcpdump ether multicast
13:01:17.365565 IP > VRRPv2, Advertisement, vrid 51, prio 101, authtype none, intvl 1s, length 20
13:01:17.366613 IP > VRRPv2, Advertisement, vrid 52, prio 101, authtype none, intvl 1s, length 20
13:01:17.367712 IP > VRRPv2, Advertisement, vrid 53, prio 101, authtype none, intvl 1s, length 20
13:01:17.368760 IP > VRRPv2, Advertisement, vrid 54, prio 101, authtype none, intvl 1s, length 20
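Each vrid above corresponds to one VRRP instance. A quick way to summarize such a capture (a sketch; the sample input is taken verbatim from the tcpdump output above) is to tally advertisements per vrid with awk:

```shell
# Sample lines, copied from the `tcpdump ether multicast` output above.
capture='13:01:17.365565 IP > VRRPv2, Advertisement, vrid 51, prio 101, authtype none, intvl 1s, length 20
13:01:17.366613 IP > VRRPv2, Advertisement, vrid 52, prio 101, authtype none, intvl 1s, length 20
13:01:17.367712 IP > VRRPv2, Advertisement, vrid 53, prio 101, authtype none, intvl 1s, length 20
13:01:17.368760 IP > VRRPv2, Advertisement, vrid 54, prio 101, authtype none, intvl 1s, length 20'

# Count advertisements per vrid: find the field after "vrid",
# strip its trailing comma, and tally occurrences.
summary=$(printf '%s\n' "$capture" | awk '/VRRPv2, Advertisement/ {
    for (i = 1; i <= NF; i++)
        if ($i == "vrid") { sub(",", "", $(i + 1)); count[$(i + 1)]++ }
} END {
    for (v in count) printf "vrid %s: %d advertisements\n", v, count[v]
}' | sort)

printf '%s\n' "$summary"
```

On a live system you would pipe `tcpdump -l ether multicast` straight into the awk script instead; a vrid that stops showing up is an instance that has gone silent.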

That's it. Once you have set up two of these successfully, you have created yourself a 'cloud load balancer'.