- 18th November 2016
- Posted by: Stephen Downie
- Category: DevOps, Juju
So it's been a long year of travel and talking, but eventful and fruitful all the same. Many thanks to Mark, Jorge and co at Canonical for making it happen and allowing me to travel around to various random places to talk about the amazing technology they have built.
Friday was an eventful trip to Antwerp for the Pentaho Community Meetup, and it was nice to meet up with a bunch of familiar faces, many of whom were interested in what Juju had to offer in terms of data processing and manipulation. A few people had already fiddled with it and were after more information; many hadn't, but came and asked questions later. Some takeaways and thoughts from speaking to people in Antwerp:
- People in BI already run Mesos clusters (who knew!)
- It would be cool to see Juju leveraged by ETL tooling to deploy ETL workloads across the cloud or MAAS dynamically
- Plug and play monitoring is a major bonus
After talking and drinking (this was Antwerp after all) I dragged my aching liver across to Seville for ApacheCon Big Data EU, which was great. Canonical had a booth there with some familiar faces (Pete Vander Giessen, you are a top guy, although possibly too nice for your own good!), and I had two talks scheduled.
Talk #1 was ETL pipelines and processing with Apache OODT and Solr. There was a good turnout, yet nobody in the audience had heard of Apache OODT, which shows how many people pick talks at random to learn new tech, and I think that's cool. The talk ran over what we have done at NASA JPL to build data processing pipelines that incorporate a catalog and archive to allow for fast data discovery and dissemination.
My second talk was Scalable Big Data using Juju and Apache Drill. Again, a good turnout for what is a pretty niche talk for developers. We discussed the complexities of deploying Apache Hadoop and other distributions outside of a VM or SaaS-type environment, then ran through deploying Big Top and linking it to Apache Drill to provide SQL-over-HDFS capabilities. The relations and metadata in Juju make standing up and scaling these types of environments so much easier, and it was nice to see how interested people were in what you can achieve in 20 lines of bash or a few clicks of a mouse.

People showed great interest in what was achievable with Juju and the existing charms, what else could be hooked up to the platforms and development environments, and what the plans were for future data processing applications that could leverage the deployment of these technologies. Let's just say the list isn't short: Apache HAWQ, Apache Geode, Apache Ignite and others all make sense to appear on Juju over the next 12 months, and I'm looking forward to engaging those communities in helping charm up their software for easy deployment at scale.
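To give a feel for what those 20-odd lines of bash look like, here is a rough sketch of the Juju workflow. The bundle name, charm name and relation endpoints below (`hadoop-processing`, `drill`, `namenode`, `slave`) are illustrative assumptions for this post, not verified charm store identifiers:

```shell
# Hedged sketch: names and endpoints are illustrative, not exact charm IDs.

# Deploy a Big Top-based Hadoop bundle (NameNode, ResourceManager, slaves, ...)
juju deploy hadoop-processing

# Deploy Apache Drill alongside it (hypothetical charm name)
juju deploy drill

# A relation wires Drill to HDFS, exchanging the metadata it needs
# to run SQL queries over the cluster's data
juju add-relation drill namenode

# Scaling out is a single command: add more worker units
juju add-unit slave -n 3

# Watch the model converge
juju status
```

The point of the talk was exactly this: the relations carry the configuration and metadata between services, so linking Drill to a running Hadoop cluster is a one-liner rather than a hand-edited config exercise.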