- 24th February 2016
- Posted by: Stephen Downie
- Category: DevOps, JuJu
As platforms scale, the need for systems that can cope with the volume grows rapidly, and the thinking behind how those platforms are built must change to deliver the ever-increasing flexibility and extensibility these systems require.
Take a look at a standard Hadoop distribution. Whilst the demos come as standalone VMs, deploying the whole Hadoop stack on a single node is a configuration management nightmare and doesn't lend itself to scaling at all. This is where application modelling comes into play. Instead of thinking about servers and services, we think about the application: how we'd like to model it, and how we'd like the final solution deployed for users. This abstraction seems minor but it is important. Instead of spinning up nodes, configuring them, deploying the software, setting up the firewalls and checking they are all up and running, we can use platforms like Juju to make life a lot easier.
Of course, one way around this is to migrate to a SaaS platform, but as Mark Shuttleworth proposed in Gent, SaaS is really "proprietary operations". Someone still has to run the systems for you and all the other users; you just pay another company for that luxury. At Spicule we believe in open source and in user friendliness, but most importantly we believe in empowering the end users rather than offloading the ops to someone else. We like companies to be in charge of their own destiny.
We don't want lots of proprietary code or big command-line scripts that confuse end users. We want open code that other people can reuse, validate and spot errors in (yes, we do occasionally write software with bugs), and we want web interfaces that administrators can interact with from around the globe. This is where the beauty of Juju comes in.
When we spin up a new platform, we no longer have to stand up a server, configure it and install the software. With Juju we deal in models, not in servers. A Juju model is "stuff connected to other stuff" (thanks again to Mark Shuttleworth), and this idea helps explain why Juju makes life so much easier for scalable deployments. For example, say I want to deploy a master-slave replicated MySQL cluster of four nodes, with a WordPress website on top, all fronted by HAProxy. Using Juju modelling this is easy, and even visual, so I can readily show the services to people with a vested interest in the platform.
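As a rough sketch, that model can be built from the command line in a handful of steps (charm names here follow the common charm store names; the exact syntax varies a little between Juju releases):

```shell
# Deploy the services that make up the model
juju deploy mysql -n 4        # one master plus replicated slaves
juju deploy wordpress
juju deploy haproxy

# Form relations: each side configures itself to talk to the other
juju add-relation wordpress mysql
juju add-relation wordpress haproxy
```

Once the relations are in place, the charms exchange connection details automatically; there is no hand-editing of wp-config.php or haproxy.cfg.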
In the example above you can clearly see what is deployed and the relations that have been formed between the different services. These relations can trigger actions that automate the connections and disconnections between services.
Of course, over time the load and data volume on my cluster will increase, so I need more horsepower. With Juju this isn't a problem: at the click of a button I can deploy any number of new MySQL slaves to make sure my WordPress site remains quick and responsive, with no extra effort.
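On the command line, scaling out is a single instruction; Juju provisions the machines, installs MySQL and joins the new units into the existing replication relation:

```shell
# Add three more MySQL units to absorb the extra read load
juju add-unit mysql -n 3
```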
In reality many deployments are far more involved than this, but hopefully this gives you a good, if basic, idea of how the models work and scale.
Altering resource constraints
Now, getting more involved, it doesn't end there. In the example above we haven't discussed compute power, storage or networking. Juju can easily abstract all of this as well. In EC2, instance sizes are given cryptic names such as m4.large, m4.xlarge and m4.2xlarge. What do these mean? You can find out on the Amazon website, but how do they compare to other cloud services? Each provider uses a different naming convention, so building a single model is hard if you have to specify provider-specific instance sizes. How do I make sure the new servers I start on a different provider are of similar size and power? Juju maps the available machines across providers: if I say in Amazon "give me a machine with 2 cores, 4GB of RAM and 32GB of disk space", Juju will find the best match. If I then decide to switch to Azure and use the same constraints, Juju again starts a server that most closely matches my requirements, ensuring that if I swap providers the change in performance should be minimal.
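In Juju these requirements are expressed as constraints attached to a deploy (the exact constraint key names, such as cores versus cpu-cores, differ between Juju versions, so treat this as an illustrative sketch):

```shell
# Ask for a machine by resource shape rather than a provider-specific name;
# Juju translates this into the closest instance type the cloud offers
juju deploy mysql --constraints "cores=2 mem=4G root-disk=32G"
```

The same command works unchanged whether the model is backed by Amazon, Azure or another provider.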
In my earlier example we also didn't look at service placement. By default Juju puts each service on its own node. Of course, this might waste resources, or simply not be viable if you are deploying in a resource-constrained environment. You have a few options here: you can instruct Juju to deploy multiple services to the same "server". A server doesn't need to be physical; it might be a virtual machine in the cloud or a container instance in a data centre. An often nicer way to deploy applications on the same boxes, but with a degree of separation, is to use containers. In this case I might tell Juju to deploy my Apache webserver to an LXC/LXD or KVM container running on my existing MySQL server. This avoids spinning up yet another server to host it, but still allows the service to run in its own operating system, without other services having a direct impact on its operation.
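Placement is expressed with the --to directive; assuming machine 1 is the MySQL host, a containerised colocation might look like this (older Juju releases use lxc: rather than lxd: as the container type):

```shell
# Place a new apache2 unit inside an LXD container on machine 1,
# sharing the hardware with MySQL but isolated from it
juju deploy apache2 --to lxd:1

# Or use a KVM container for stronger isolation
juju deploy apache2 --to kvm:1
```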
You can also take this process further, into the realm of network topology. Going back to my original diagram, we had a WordPress instance fronted by HAProxy. HAProxy is the front end, the place where visitors from the outside world will hit. As such it needs to be "exposed" to users, whilst the rest of the services remain behind the firewall. If you then tweak the subnets and available networks, you have in effect created a DMZ zone, an apps zone and a PCI zone where the database sits. To do this we can use Juju zones which, whilst not supported on all providers, can provide different subnets for different application spaces. Apart from the expose step and defining a couple of application zones you have done nothing extra, yet you end up with effective networking areas that operate exactly as they would had you built them manually.
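The expose step itself is one command; only exposed services get provider firewall ports opened to the outside world:

```shell
# Open the firewall for the public-facing front end only;
# wordpress and mysql stay unexposed behind the provider firewall
juju expose haproxy
```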
Onsite and single instance deployment
You don't need to be using a fancy cloud provider; there are other options. Manual deployment allows you to deploy charms to existing machines without the automatic spinning up of new servers. This doesn't mean you lose all of the flexibility: you can still make excellent use of unit colocation with LXD containers, which keeps the services separate and running smoothly. If you have a number of servers, you can also configure Juju to run against MAAS (Metal as a Service), which gives you the managed deployment properties of a cloud provider but without your data going off site.
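With manual provisioning you enlist your own servers into the model over SSH and then target them explicitly (the address below is a placeholder for one of your own machines):

```shell
# Enlist an existing server into the model over SSH
juju add-machine ssh:ubuntu@192.0.2.10

# Deploy to that specific machine by its Juju machine number
juju deploy mysql --to 1
```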
Finally, if you want the flexibility of cloud without offloading all of your data to a cloud provider, Juju can spin up and manage a full OpenStack cluster on your own hardware, letting you concentrate on the essential services rather than the underlying infrastructure.
How We Can Help
In a future post we will look deeper at a Juju Charm and look at how it manages state, actions and events.
In the meantime, if you are interested in Juju and how it can help solve your application modelling issues, get in contact and we can discuss your problems in greater detail. We can then work on a small proof of concept or demo that shows how Spicule and Juju can make your life easier and more streamlined. I've said this before, but Juju allows users to concentrate on what matters, not the nitty-gritty deployment details, and that's what makes it great.