- 20th January 2017
- Posted by: Stephen Downie
- Category: Compute, DC/OS
Since the dawn of computing, systems administrators have agonised over how best to utilise the power at their fingertips. To get the best value for money you want your computers running at maximum capacity as much as possible. It may be that your services flex, or that there are peak times when your servers are under load. How do you make the most of the other times, when the servers are sat idling?
If your services run at roughly the same capacity they are easy to measure: ensure you have enough cores and RAM available, and consider co-locating services. Similarly, if you have some processes that are CPU-bound but others that are more IO-intensive, it might be a great idea to put them on the same box to max out the resources as much as is sensible. Of course, don’t forget to leave some headroom for operating system services.
Likewise, if you have something that bursts, maybe a reporting database that gets busy at month end, how about putting other scheduled tasks on the box that don’t run at that particular time? You might have cron jobs that generate exports, take backups or process static data; run these when the database isn’t doing very much.
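As a rough sketch of that idea, a crontab like the one below schedules the housekeeping jobs for the small hours, well away from the month-end reporting crunch (the script paths here are made up for illustration):

```
# Nightly data export at 02:00, when the reporting database is quiet
0 2 * * * /usr/local/bin/generate-exports.sh

# Weekly backup on Sunday at 03:00
0 3 * * 0 /usr/local/bin/run-backup.sh

# Rebuild static data on the 15th of the month, far from month-end load
30 1 15 * * /usr/local/bin/process-static-data.sh
```

The exact times will depend on when your own database is actually idle; the point is simply that the schedules are chosen to avoid overlapping with the bursty workload.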
In some cases it might be as simple as `apt install serviceX && apt install serviceY`; of course, in others it might not. The way you deploy your services depends on what automation you currently have in place and how much control you want over it.
One option is to use Puppet or Chef to script your deployments, placing services onto specific units and leaving the tooling to it. This gives you the finest-grained control over your deployments, but what if you didn’t want that?
Containers are quick to deploy, can be designed with scale in mind and, if you don’t like what you’ve done, are quick to undo. Again, you could manually place them onto servers, or use Puppet or Juju to deploy them and run your services that way.
The other option is container orchestration: platforms like DC/OS, Mesos, Kubernetes, CoreOS and Swarm are designed to help you run your containers at scale. Instead of worrying about where your containers are going, you can let your orchestrator worry about it. Tell DC/OS to deploy your container and it will go and do it with minimal fuss. Need it on a machine with GPU support? Not a problem: let it know and it will deploy your container to a machine with a GPU, assuming you have the resource available. Let your orchestrator worry about compute density so you don’t have to. It doesn’t have to be a completely automatic process, but deploying onto predefined labelled servers without thinking about the “how” can greatly simplify deployments across your organisation.
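To make that concrete, a Marathon app definition on DC/OS can request a GPU via the `gpus` field (this needs GPU-capable agents and the Mesos containerizer; the app id, command and image below are invented for illustration):

```json
{
  "id": "/gpu-worker",
  "cmd": "python train.py",
  "cpus": 1,
  "mem": 2048,
  "gpus": 1,
  "instances": 1,
  "container": {
    "type": "MESOS",
    "docker": { "image": "example/gpu-trainer:latest" }
  }
}
```

Saved as `gpu-worker.json`, this could be deployed with `dcos marathon app add gpu-worker.json`, and Marathon will only place it on an agent that can satisfy the GPU requirement.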
Making use of the power of container orchestration also lets you start deploying at scale. In this example we have put a Marathon load balancer (marathon-lb) in front of our web application, running across three different nodes. This of course allows for failure, online updates, A/B testing and so on, whilst keeping your cluster as efficient as possible.
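A sketch of such an app definition is below, assuming marathon-lb is already running in the cluster; the app id, image and virtual host are placeholders. Setting `"instances": 3` gives the three nodes, and the `HAPROXY_*` labels tell marathon-lb to route traffic to the instances:

```json
{
  "id": "/webapp",
  "cpus": 0.5,
  "mem": 256,
  "instances": 3,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "example/webapp:latest",
      "network": "BRIDGE",
      "portMappings": [{ "containerPort": 80, "servicePort": 10000 }]
    }
  },
  "labels": {
    "HAPROXY_GROUP": "external",
    "HAPROXY_0_VHOST": "webapp.example.com"
  },
  "healthChecks": [
    { "protocol": "HTTP", "path": "/", "portIndex": 0 }
  ]
}
```

With this in place, rolling out a new image version updates the instances behind the load balancer without downtime, which is what makes the online updates and A/B testing mentioned above practical.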