Sunday, January 24, 2016
Until recently, a state-of-the-art DNS platform could do only two things with regard to traffic management: first, it wouldn’t send users to a server that was down, and second, it would try to return the IP address of the server closest to the end user making the request.
This is a bit like using a GPS unit from 1999 to get to a gas station: it can give you the location of one that’s close by and maybe open according to its Yellow Pages listing, but that’s about it. Maybe there is roadwork or congestion on the one route you can take to get there. Maybe the gas station is out of diesel, or perhaps they’re open but backed up with lines stretching down the block. Perhaps a gas station that’s a bit farther away would have been a better choice?
Internet properties face similar challenges in digital form, and they go far beyond proximity and a binary notion of “up/down.” Does the datacenter have excess capacity? What’s traffic like getting there? Are there any data privacy or protection protocols we need to take into account?
Today’s data-driven application delivery models require a new way of managing DNS traffic. Next-gen DNS platforms have been built from the ground up with traffic management at their core, giving businesses routing capabilities that were previously impossible.
Here are five best practices to consider when implementing an advanced, intelligent traffic management platform:
- Intelligent routing: Look for solutions that route users based on their ISP, ASN, IP prefix or geographical location. Geofencing can ensure users in the EU are only serviced by EU datacenters, for instance, while ASN fencing can make sure all users on China Telecom are served by ChinaCache. Using IP fencing will make sure local-printer.company.com automatically returns the IP of your local printer, regardless of which office an employee is visiting.
- Leverage load shedding to prevent meltdowns: Automatically adjusting the flow of traffic to network endpoints in real time, based on telemetry coming from the endpoints or applications themselves, can prevent overloading a datacenter without taking it offline entirely, seamlessly routing users to the next nearest datacenter with excess capacity.
- Enact business rules: Meet your applications’ needs with filters that use weights, priorities and even stickiness. Distribute traffic in accordance with commits and capacity. Combine weighted load balancing with sticky sessions (i.e. session affinity) to adjust the ratio of traffic distributed among a group of servers while ensuring that returning users continue to be directed to the same endpoint.
- Route around problems: Identify solutions that constantly monitor endpoints from the vantage point of the end user and then send users from each network to the endpoint that will serve them best.
- Cloud burst: Leverage ready-to-scale infrastructure to handle planned or unplanned traffic spikes. If your primary colocation environment is becoming overloaded, make sure you're able to dynamically send new traffic to another environment according to your business rules, whether it’s AWS, the next nearest facility or a DR/failover site.
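The geofencing and ASN fencing idea above boils down to a routing decision made per query. Here's a minimal sketch in Python, with entirely hypothetical pools and thresholds (no specific platform's API is implied):

```python
# Hypothetical geofencing / ASN fencing decision: answer a DNS query with a
# different server pool depending on where the client's resolver sits.
# All pool names, IPs and mappings below are illustrative assumptions.

EU_COUNTRIES = {"DE", "FR", "NL", "IE"}
ASN_POOLS = {4134: ["203.0.113.10"]}          # e.g. an ASN pinned to a CDN pool
REGION_POOLS = {"EU": ["198.51.100.5"],        # EU users stay in EU datacenters
                "DEFAULT": ["192.0.2.7"]}

def answer(country: str, asn: int) -> list:
    """Return A-record answers for a client, fenced by ASN first, then geography."""
    if asn in ASN_POOLS:
        return ASN_POOLS[asn]
    if country in EU_COUNTRIES:
        return REGION_POOLS["EU"]
    return REGION_POOLS["DEFAULT"]
```

In practice the country and ASN would come from a GeoIP lookup on the resolver's (or EDNS client subnet) address; the fencing logic itself stays this simple.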
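Load shedding, as described above, diverts a growing share of new traffic away from a hot datacenter rather than failing it outright. A sketch of that decision, with assumed watermarks (the 80%/95% thresholds are illustrative, not any vendor's defaults):

```python
import random

# Illustrative load shedding: below the low watermark all traffic stays on the
# primary; above the high watermark everything diverts; in between, the
# diverted fraction ramps linearly with reported load.
SHED_START = 0.80   # begin shedding at 80% of capacity (assumed threshold)
SHED_FULL = 0.95    # divert all new traffic above 95% (assumed threshold)

def pick_endpoint(primary_load, primary, fallback, rng=random.random):
    """Choose an endpoint for one new query given the primary's load telemetry."""
    if primary_load <= SHED_START:
        return primary
    if primary_load >= SHED_FULL:
        return fallback
    shed_fraction = (primary_load - SHED_START) / (SHED_FULL - SHED_START)
    return fallback if rng() < shed_fraction else primary
```

Because the decision is probabilistic per query, the datacenter keeps serving the traffic it can handle instead of being yanked out of rotation entirely.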
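The combination of weighted load balancing and sticky sessions from the business-rules bullet can be sketched with a stable hash: the client's address always lands in the same weight bucket, so returning users hit the same endpoint while the weights still set the overall traffic ratio. This is a generic sketch, not any particular platform's implementation:

```python
import hashlib

# Hypothetical pool: (endpoint, weight) pairs giving a 75% / 25% split.
POOL = [("10.0.0.1", 3), ("10.0.0.2", 1)]

def sticky_pick(client_ip):
    """Weighted endpoint selection that is deterministic per client IP."""
    total = sum(w for _, w in POOL)
    # A stable hash of the client keeps returning users on the same endpoint.
    bucket = int(hashlib.sha256(client_ip.encode()).hexdigest(), 16) % total
    for endpoint, weight in POOL:
        if bucket < weight:
            return endpoint
        bucket -= weight
    return POOL[-1][0]
```

Changing the weights shifts the traffic ratio for new clients, while existing clients move only if their hash bucket falls outside their endpoint's new weight range.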