Saturday, March 7, 2015

AWS Autoscaling with F5 Big IP dynamic node add/remove

Continuing the F5 series (setting up F5 BIG-IP HA on AWS EC2, setting up a Virtual Server), I am going to share my experience using AWS Auto Scaling with F5 BIG-IP.

AWS EC2 has an awesome feature called Auto Scaling which one can use to maintain a pool of servers at a desired size, or to scale the cluster up or down based on CPU utilization, network I/O, etc. This gives us the ability to size our cluster according to the traffic, resulting in optimal resource utilization and cost savings.

The tricky part here is keeping F5 in sync as instances come and go. Since F5 needs to know about every node that is added or removed, we need to integrate AWS Auto Scaling with F5.

I used the F5 REST API to achieve this. AWS Auto Scaling publishes notifications about scaling activities through SNS. All we need to do is listen :)

I have created a microservice which listens for these events and makes the corresponding changes on F5. I have published my implementation here.
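
To give an idea of the approach (this is a simplified sketch, not the published implementation), the handler below reacts to an Auto Scaling SNS notification and creates or deletes the corresponding node on F5 through the iControl REST API. The BIG-IP address, credentials, pool name and service port are placeholder assumptions.

import json
import boto3
import requests

BIGIP = "https://10.0.0.2"          # BIG-IP management address (placeholder)
AUTH = ("admin", "admin-password")  # iControl REST supports basic auth
POOL = "web_pool"                   # pool that the autoscaled nodes join (assumption)
PORT = 80                           # service port on the backend nodes (assumption)

ec2 = boto3.client("ec2", region_name="us-east-1")

def private_ip(instance_id):
    # Look up the private IP of the instance referenced in the notification.
    res = ec2.describe_instances(InstanceIds=[instance_id])
    return res["Reservations"][0]["Instances"][0]["PrivateIpAddress"]

def handle_sns_message(message_body):
    msg = json.loads(message_body)
    event = msg.get("Event", "")
    instance_id = msg.get("EC2InstanceId")
    member = "{0}:{1}".format(instance_id, PORT)
    if event == "autoscaling:EC2_INSTANCE_LAUNCH":
        # Create the node on F5, then add it to the pool as a member.
        requests.post(BIGIP + "/mgmt/tm/ltm/node", auth=AUTH, verify=False,
                      json={"name": instance_id,
                            "address": private_ip(instance_id)})
        requests.post(BIGIP + "/mgmt/tm/ltm/pool/~Common~" + POOL + "/members",
                      auth=AUTH, verify=False, json={"name": member})
    elif event == "autoscaling:EC2_INSTANCE_TERMINATE":
        # Remove the pool member first, then delete the node itself.
        requests.delete(BIGIP + "/mgmt/tm/ltm/pool/~Common~" + POOL +
                        "/members/~Common~" + member, auth=AUTH, verify=False)
        requests.delete(BIGIP + "/mgmt/tm/ltm/node/~Common~" + instance_id,
                        auth=AUTH, verify=False)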

Please feel free to use it. A simple thank you would suffice if you find it useful and it saves you some time :)

Thursday, January 15, 2015

Creating a Virtual Server on F5 BIG IP HA Active/Passive (Active/Standby) on AWS EC2 / VPC

Please visit my previous blog on how to set up an Active/Standby F5 BIG-IP on AWS here. That post also covers some basics of F5 terminology.

Now it's time to get your load balancer up and running. You can run multiple load-balanced endpoints on a single F5; these are called Virtual Servers.

1: Prerequisites:


  1. Make sure that you have your backend servers that you want to load balance ready.
  2. Make sure that the security groups have the required ports open both for F5 as well as backend server subnets.
  3. Make sure that the services you want to load balance on these nodes are running :)

2: Setup Nodes on F5:

  1. Go to Local Traffic > Nodes > Node List. Click Create.
  2. Give the desired Name and Address.
  3. Set Health Monitors to Node Default.
  4. Click Finished.
If the node is reachable, you should see the node status as a blue box.
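
If you prefer scripting over the GUI, the same node can also be created through F5's iControl REST API. A minimal sketch, with a placeholder management address, credentials and node name:

import requests

BIGIP = "https://10.0.0.2"          # BIG-IP management address (placeholder)
AUTH = ("admin", "admin-password")

# Create the node; leaving "monitor" unset means it uses the Node Default monitor.
resp = requests.post(BIGIP + "/mgmt/tm/ltm/node", auth=AUTH, verify=False,
                     json={"name": "app-server-1",     # the Name field
                           "address": "10.0.2.10"})    # the Address field
resp.raise_for_status()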

3: Create a Pool on F5:
  1. Go to Local Traffic > Pools > Pool List. Click Create.
  2. Enter the desired Name.
  3. Select one of the Health Monitors. This is used to check the health of the backend servers.
  4. Select the Load Balancing Method.
  5. Under New Members, select Node List and add the desired nodes to the pool.
  6. Click Finished.
  7. If the pool has been set up properly, it should show a green status.
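
The equivalent pool creation over iControl REST, assuming the node from the previous section and an HTTP monitor (all names below are placeholders):

import requests

BIGIP = "https://10.0.0.2"
AUTH = ("admin", "admin-password")

# Create the pool with an HTTP health monitor and round-robin load balancing.
requests.post(BIGIP + "/mgmt/tm/ltm/pool", auth=AUTH, verify=False,
              json={"name": "web_pool",
                    "monitor": "http",
                    "loadBalancingMode": "round-robin"}).raise_for_status()

# Add the node created earlier as a member on port 80.
requests.post(BIGIP + "/mgmt/tm/ltm/pool/~Common~web_pool/members",
              auth=AUTH, verify=False,
              json={"name": "app-server-1:80"}).raise_for_status()
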
4: Creating the Virtual Server on F5:
  1. Add an additional private IP to the ENI which is on the external VLAN. This becomes your load balancer IP.
  2. Go to Local Traffic > Virtual Servers > Virtual Server List. Click Create.
  3. Enter the desired Name and Type.
  4. Source Address is the IP range from which this VS should accept traffic. To allow access from all IPs, enter 0.0.0.0/0.
  5. Destination Address should be the new private IP that we just created.
  6. Service Port is the port your service is running on.
  7. Source Address Translation should be Auto Map. Please note that if you don't do this, your Virtual Server will not work and requests will never reach your backend servers.
  8. For Default Pool, select the pool that we created above.
  9. Select the other settings as desired. For this exercise, leave them as they are.
If the VS has been set up properly, its status should show green.
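
For reference, the same virtual server can be created over iControl REST. A sketch, assuming 10.0.1.10 is the secondary private IP from step 1 and web_pool is the pool created earlier:

import requests

BIGIP = "https://10.0.0.2"
AUTH = ("admin", "admin-password")

requests.post(BIGIP + "/mgmt/tm/ltm/virtual", auth=AUTH, verify=False,
              json={"name": "vs_web",
                    "source": "0.0.0.0/0",            # accept traffic from all IPs
                    "destination": "10.0.1.10:80",    # the secondary private IP + port
                    "ipProtocol": "tcp",
                    "pool": "web_pool",
                    "sourceAddressTranslation": {"type": "automap"}}
              ).raise_for_status()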

Hit the VS IP and voila! Your application is load balanced.

To learn how to integrate Auto Scaling with F5, go here.

Wednesday, January 14, 2015

Deploying F5 BIG IP HA Active/Passive (Active/Standby) on AWS EC2 / VPC

BIG-IP is a big name in the world of Application Delivery Platforms. It is used primarily as a load balancer/front end for hosting a number of applications. It is modular in nature and has a variety of modules for things like optimized content delivery, application firewall, etc. The full set of features is listed here.

A few years back, F5 was a hardware-only box which one had to buy and wire to switches and machines. They have now come up with a cloud offering called BIG-IP VE (VE stands for Virtual Edition). One can now choose to either run their hardware or run the VE in the cloud.

We had to set up F5 VE on AWS for one of our customers. Coming from a non-networking, non-physical-server background, it was difficult for us to understand the F5 networking terminology and map it to AWS, which, as we all know, is completely abstracted.

F5 provides documentation on how to host F5 on EC2, and it's pretty good. It's available here. The sad part is that it assumes one understands F5 completely, and it is best suited for people who have hands-on experience running F5 hardware boxes. I followed it and was able to set up the F5, but with some gotchas which I would like to share with you in this article. I am also going to brief you on the basics of F5 and how it works.

Some terms that one should know:

VLAN (Virtual LAN):

We all understand what a LAN is. A Virtual LAN is used to create further subsections of the LAN. For example, in the case of a switch, all the ports on it constitute a single broadcast domain, so if one machine sends out a broadcast message it is placed on all the ports of the switch. This leads to a lot of unnecessary traffic.

Since a switch is a layer 2 device and is not aware of the network layer, all the ports are part of the same network. Suppose we have a very big network where 1000 machines are connected via a switch. What if I want to segregate this network further, for example into three groups like SALES, MARKETING, and DEVELOPMENT? I want to avoid cross-group traffic, which is unavoidable with a plain switch because it is not aware of the logical subnets (even if I create one subnet per group, which is possible but not recommended). So if a machine in SALES is looking for another machine within that group, it sends out an ARP request which is received by all the machines on the switch and not just the SALES subnet. This causes a lot of unnecessary traffic.

To avoid this, some switches come with a facility to create virtual LANs. It allows us to group ports (physical switch ports) together into a virtual network. So now we can say that ports 1, 2, and 3 belong to VLAN A and ports 4, 5, and 6 belong to VLAN B. There is no longer a single broadcast domain, and if an ARP request is sent by a machine in VLAN A it stays within that VLAN (those ports, to be precise). Now we can have a different subnet for each VLAN, and these subnets can only talk to each other through a router. This is usually achieved by adding tags to the ports.

This way we can reduce a lot of unnecessary traffic by limiting our broadcast domain to a smaller section.

AWS does not support VLANs, so for us a VPC subnet is as good as a VLAN and can be used as such, but nothing stops us from creating a pseudo VLAN which is smaller than a subnet.

Virtual Server:

A Virtual Server in F5 is equivalent to an ELB. With an ELB we get a domain name and not an IP, but with F5 we get an IP. A single F5 box can run multiple such load-balanced endpoints, so a single F5 box can be used for all reverse proxy requirements in a VPC. As the name implies, it's a logical server and not an actual one, identified by an IP (EIP or private IP). Every Virtual Server has a pool of servers which it load balances; this is similar to the instances behind an ELB. Since multiple private IPs can be attached to a single ENI, the number of Virtual Servers we can run on an F5 is limited by the number of private IPs its ENIs can hold (which depends on the instance type).

Self IP:

An F5 box can be part of multiple VLANs. Think of the Self IP as the IP the F5 box uses to identify itself on a VLAN, since a single ENI could have multiple private IPs attached to it which may be used by Virtual Servers or for other purposes. This IP is static in nature and does not migrate on failover.

Floating IP:

For an HA setup, we need the VLANs to migrate from one box to the other as well. This is achieved by assigning a floating IP to each VLAN. This IP migrates from one F5 box to the other on failover. The movement happens by reassigning this private IP from box A's ENI to box B's ENI through AWS API calls.
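
Under the hood this is essentially the EC2 AssignPrivateIpAddresses API call with reassignment allowed. A boto3 sketch of the equivalent call (the ENI ID and IP below are placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Reassign the floating IP 10.0.1.3 to box B's external ENI.
# AllowReassignment lets the IP move even though box A currently holds it.
ec2.assign_private_ip_addresses(
    NetworkInterfaceId="eni-0bbbbbbb",   # box B's external-subnet ENI (placeholder)
    PrivateIpAddresses=["10.0.1.3"],
    AllowReassignment=True,
)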

Traffic Group: 

In an HA setup, the entity that moves from one box to the other is the Traffic Group. All the floating IPs and VS IPs are part of it. We can also force the movement of the traffic group manually through the console.

Now let's get to the actual setup of an HA cluster:

1: Prerequisites:

  1. An AWS account with a VPC with at least three subnets. For this setup, let's create a VPC with CIDR 10.0.0.0/16 and three subnets: 10.0.0.0/24 (management), 10.0.1.0/24 (external), and 10.0.2.0/24 (internal). A boto3 sketch of creating these follows this list.
  2. Two Security Groups as mentioned here.
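
If you prefer to script this prerequisite, here is a minimal boto3 sketch of the VPC and the three subnets (the region and availability zone are assumptions):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the VPC and the three subnets used throughout this post.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

subnets = {}
for name, cidr in [("management", "10.0.0.0/24"),
                   ("external", "10.0.1.0/24"),
                   ("internal", "10.0.2.0/24")]:
    resp = ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr,
                             AvailabilityZone="us-east-1a")
    subnets[name] = resp["Subnet"]["SubnetId"]

print(vpc_id, subnets)
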
2: Launch Box A:
  1. Go here and select the AMI which suits you.
  2. For the subnet, select the management subnet and assign a private IP (for example 10.0.0.2). Add two more network interfaces, one each from the external and internal subnets, and assign one private IP to each (for example 10.0.1.2 and 10.0.2.2).
  3. For the security group, select allow-all-traffic.
  4. Once the machine is launched, assign an EIP to the management ENI. This is done so that the management port is accessible over the internet for configuration.
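
If you would rather script the launch, here is a minimal boto3 sketch following the same layout. The AMI ID, instance type, key name, security group and subnet IDs are placeholders; use the F5 BIG-IP VE marketplace AMI for your region.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

res = ec2.run_instances(
    ImageId="ami-xxxxxxxx",           # F5 BIG-IP VE marketplace AMI (placeholder)
    InstanceType="m3.xlarge",         # pick a size supported by the AMI (assumption)
    KeyName="my-keypair",
    MinCount=1, MaxCount=1,
    NetworkInterfaces=[
        {"DeviceIndex": 0, "SubnetId": "subnet-mgmt",
         "PrivateIpAddress": "10.0.0.2", "Groups": ["sg-allow-all"]},
        {"DeviceIndex": 1, "SubnetId": "subnet-external",
         "PrivateIpAddress": "10.0.1.2", "Groups": ["sg-allow-all"]},
        {"DeviceIndex": 2, "SubnetId": "subnet-internal",
         "PrivateIpAddress": "10.0.2.2", "Groups": ["sg-allow-all"]},
    ],
)
instance = res["Instances"][0]

# Find the management ENI (device index 0) and attach an EIP to it.
mgmt_eni = next(ni["NetworkInterfaceId"] for ni in instance["NetworkInterfaces"]
                if ni["Attachment"]["DeviceIndex"] == 0)
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(AllocationId=eip["AllocationId"],
                      NetworkInterfaceId=mgmt_eni)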
3: Setting up the admin password:
  1. Log in to the new instance that you just launched. Use the key pair (.pem file) and the Elastic IP address of your EC2 instance: $ ssh -i <username>-aws-keypair.pem root@<elastic IP address of EC2 instance>
  2. At the command prompt, type tmsh modify auth password admin.
  3. To ensure that the system retains the password change, type tmsh save sys config, and then press Enter.
4: VLAN setup:
  1. Log in at https://<EIP>. Enter the admin username/password that we set in the last step.
  2. A setup wizard will come up. Complete the first 2-3 steps (license activation), then quit the wizard. Don't finish the rest of the steps, as we will be doing those manually.
  3. Go to Network > VLAN > VLAN List. Click Create.
  4. Enter the name internal.
  5. Select 1.2 for the Interface and Untagged for Tagging. Click the Add button.
  6. Click Finished.
  7. Repeat the same steps to create another VLAN named external. For the interface, select 1.1.
5: Self IP setup:
  1. Go to Network > Self IPs. Click Create.
  2. Set Name to self_ip_external, IP Address to 10.0.1.2, Netmask to 255.255.255.0, VLAN to external, and Port Lockdown to Allow All. Select the default Traffic Group.
  3. Do the same for the internal VLAN.
  4. Click Finished.
6: Setup AWS Credentials: Enter AWS credentials under System > Configuration > AWS.

7: Getting ready for HA setup:
  1. Go to Device Management > Devices > Device Connectivity > Config Sync. Select the external VLAN IP.
  2. Go to Device Management > Devices > Device Connectivity > Failover Network. Click Add under Failover Unicast Configuration. Use the management IP (10.0.0.2) here.
8: Set up Box B: Follow all the above steps to set up the other box. Needless to say, the IPs will be different for this box :)

9: HA cluster setup:
  1. On Box A, go to Device Management > Device Trust > Peer List. Click Add. Use the management IP of Box B and the admin username/password. Follow the rest of the steps.
  2. Now both boxes are paired.
  3. Go to Device Management > Device Groups. Click Create.
  4. Enter any name to identify the device group which will participate in the failover cluster.
  5. Group Type is Sync-Failover.
  6. Drag both IPs from right to left.
  7. Select Full Sync and Network Failover.
  8. You may have to sync the config once to Box B. Go to Device Management > Overview and sync Box A to the group once.
  9. Your HA cluster setup is done. One box will show ACTIVE and the other STANDBY.
10: Creating Floating IPs:
  1. This has to be done ONLY on Box A.
  2. Add one more secondary IP to the 10.0.1.0/24 and 10.0.2.0/24 subnet ENIs of one of the boxes through the AWS console.
  3. Go to Network > Self IPs. Click Create.
  4. Enter the name self_ip_floating_internal for the internal VLAN. Select the same values as before (with the new IP that we created above). Select traffic-group-1 (floating) for the Traffic Group.
  5. Do the same for the external VLAN.
Now the HA setup is ready. To test the movement of the VLAN floating IPs, force a failover and observe the AWS console: the floating private IPs move from one box to the other.
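
Instead of clicking through the console, you can also verify the movement with a quick boto3 check (the ENI IDs below are placeholders for the external ENIs of the two boxes):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# External-subnet ENIs of Box A and Box B (placeholders).
enis = ["eni-0aaaaaaa", "eni-0bbbbbbb"]
resp = ec2.describe_network_interfaces(NetworkInterfaceIds=enis)
for ni in resp["NetworkInterfaces"]:
    ips = [p["PrivateIpAddress"] for p in ni["PrivateIpAddresses"]]
    print(ni["NetworkInterfaceId"], ips)
# Force a failover on the active unit and run this again: the floating IP
# (10.0.1.3 in this example) should now show up under the other box's ENI.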

Any Virtual Server that we create will have its IP as part of this default floating traffic group. This group and its failover objects (like Virtual Servers and IPs) can be seen under Device Management > Traffic Groups > Failover Objects.


To learn more about creating a Virtual Server, go here.
To learn how to integrate Auto Scaling with F5, go here.