Thursday, November 06, 2014

Fabric8 version 2.0 released - Next Generation Platform for managing and deploying enterprise services anywhere

The Fabric8 open source project started five years ago as a private project aimed at making large
deployments of Apache Camel, CXF, ActiveMQ and similar services easy to roll out and manage.

At the time of its inception, we looked at lots of existing open source solutions that we could leverage to provide the flexible framework that we knew our users would require. Unfortunately, at that time nothing was a good fit, so we rolled our own - with core concepts based around:

  • Centralised control
  • A runtime registry of services and containers
  • Managed hybrid deployments, from laptop to open hybrid cloud (e.g. OpenShift)

All services were deployed into an Apache Karaf runtime, which allowed for dynamic updates of running services. Modularisation with OSGi had some distinct advantages: dynamic deployment of new services, container service discovery, and a consistent way of administering everything. However, it also meant that Fabric8 was very much tied to the Karaf runtime, forcing anyone using Fabric8 and Camel to use OSGi too.

We are now entering a sea change towards immutable infrastructure, microservices and open standardisation around how this is done. Docker and Kubernetes are central to that change, and are being backed with big investments. Kubernetes in particular, being based on the unrivalled experience that Google brings to clustering containers at scale, will drive standardisation in the way containers are deployed and managed. It would be irresponsible for Fabric8 not to embrace this change, but we want to do it in a way that makes it easy for Fabric8 1.x users to migrate. By taking this path, we are ensuring that Fabric8 users can benefit from the rapidly growing ecosystem of vendors and projects providing applications and tooling around Docker, while also being free to move their deployments to any of the growing list of platforms that support Kubernetes. However, we are also aware that there are many reasons users may want a platform that is 100% Java - so we support that too!

The goal of Fabric8 v2 is to utilise open source and open standards: to enable the same way of configuring and monitoring services as Fabric8 1.x, but to do it for any Java based service, on any operating system. We also want to future-proof the way users work, which is why adopting Kubernetes is so important: you will be able to leverage this style of deployment anywhere.
Fabric8 v2 is already better tested, more nimble and more scalable than any previous version we've released, and as Fabric8 will also be adopted as a core service in OpenShift 3, it will be hardened at large scale very quickly.

So some common questions:

Does this mean that Fabric8 no longer supports Karaf?
No - Karaf is one of the many container options we support in Fabric8. You can still deploy your apps in the same way as Fabric8 v1 - it's just that Fabric8 v2 will scale so much better :).

Is ZooKeeper no longer supported?
In Fabric8 v1, ZooKeeper was used to implement the service registry; this is being replaced by Kubernetes. Fabric8 will still run with ZooKeeper, however, to enable cluster coordination, such as master-slave elections for messaging systems or databases.

I've invested a lot of effort in Fabric8 v1 - does all this get thrown away?
Absolutely not. It will be straightforward to migrate to Fabric8 v2.

When should I look to move off Fabric8 v1?
As soon as possible. There's a marked improvement in features, scalability and manageability.

We don't want to use Docker - can we still use Fabric8 v2?
Yes - Fabric8 v2 also has a pure Java implementation, where it can still run "java containers".

Our platforms don't support Go - does that preclude us from running Fabric8 v2?
No - although Kubernetes relies on the Go programming language, we understand that won't be an option for some folks, which is why Fabric8 has an optional Java implementation. That way you can still use the same framework and tooling, but it leaves open the option of simply changing the implementation at a later date, if you require the performance, application density and scalability that running Kubernetes on something like Red Hat's OpenShift or Google's Cloud Platform can give you.

We are also extending the services that we supply with Fabric8, to include metric collection, alerting, auto-scaling, application performance monitoring and other goodies.



Over the next few weeks, the Fabric8 community will be extending the quickstarts to demonstrate how easy it is to run microservices, as well as application containers, in Fabric8. You can run Fabric8 on your laptop (using 100% Java if you wish), on your in-house bare metal (again, 100% Java if you wish), or on any PaaS running Kubernetes.



Friday, March 07, 2014

Fuse At the DevNation Conference!



The JBoss Fuse engineering team has sponsored and organised CamelOne for the last three years, but after CamelOne 2013 the opportunity came up to put all the effort into a new developer conference sponsored by Red Hat, called DevNation. This is the first time the event has been run, and it's a great opportunity to learn about all aspects of development and deployment. CamelOne was focused on Apache projects used for integration, but that in itself is quite limited; as an integration developer, you need to know about so much more. DevNation is an opportunity to learn from like-minded developers about all aspects of real-world deployments, from Hadoop to Elasticsearch, from best practices in DevOps or OSGi, to getting an insight into Docker, Apache Spark and so much more. DevNation has a lot of promise to be a great developer conference, with a broad scope that will be informative and fun. It's for this reason that the Fuse team decided to focus our attention on DevNation this year, rather than CamelOne.

The traditional way of delivering applications is outdated. Many users are rolling out across hybridised environments, and the need to be insulated from all those different environments - to have location independence and the ability to dynamically deploy, find and manage all your integration services - is going to be the key theme for the Fuse tracks at DevNation, as well as all the usual tips, tricks and secret ninja (OK, undocumented) stuff that we like to share with attendees.

DevNation this year is being held in San Francisco, and will run from Sunday April 13 - 17. You can register here - and we really hope to see you there!

Tuesday, December 17, 2013

One Technology Trend for 2014: "The Internet Of Things"

I was reading some online articles and came across Technology Trends for 2014 - the number one being the 'Internet of Things', or IoT for short. This isn't exactly a new concept - the promise of smart homes, where everything from intelligent lights to A.I. for washing machines can be monitored remotely, has been around for a while. And who could resist the concept of a smart fridge that can stock itself? The term Internet of Things has been around for over a decade, first proposed by Kevin Ashton whilst at the Auto-ID Center at MIT, primarily driven by an interest in RFID, but the ideas and use cases for an Internet of Everything have taken a while to mature.

There have been several drivers behind the IoT. The demand for renewable energy means that smart grids have to monitor and respond to changes in electricity demand and generation in a more agile manner; allowing for bi-directional energy supply from small energy producers (potentially you and me) requires smart metering and monitoring. Then there's the exponential growth of smartphones - more people are always connected, and that trend will continue.

However, when the IoT was first envisaged all those years ago, there were some technology inhibitors:

1. The limitation of IPv4 in terms of the number of physical addresses available
2. The capacity of the internet for a fully connected IoT
3. The ability of mediators to scale to millions of concurrent connections
4. The ability to store and analyse the data in a scalable way
5. The ability to analyse all the data and make sensible decisions in a timely manner

Fast forward to today and, from a technology perspective, most of these things are either solved (e.g. IPv6) or the pieces are available - and Red Hat is ideally placed to provide the whole solution for a scalable backend for the IoT, all on open source software.

Firstly, we need a standards-based, horizontally scalable solution for handling connectivity to hundreds of thousands of concurrent connections. JBoss A-MQ is combining the best of Apache-licensed middleware from Apache ActiveMQ, Qpid and HornetQ to form a highly scalable messaging solution that supports MQTT, AMQP, WebSockets and STOMP.
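To make that concrete, here is a sketch of how an ActiveMQ-based broker can expose those protocols side by side through its transport connectors. The broker name is illustrative and the ports shown are the conventional defaults - adjust for your environment:

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="iot-broker">
  <transportConnectors>
    <!-- Native OpenWire clients (Java) -->
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
    <!-- MQTT for constrained devices and sensors -->
    <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883"/>
    <!-- AMQP clients -->
    <transportConnector name="amqp" uri="amqp://0.0.0.0:5672"/>
    <!-- STOMP for simple text-based clients -->
    <transportConnector name="stomp" uri="stomp://0.0.0.0:61613"/>
    <!-- WebSockets for browser-based clients -->
    <transportConnector name="ws" uri="ws://0.0.0.0:61614"/>
  </transportConnectors>
</broker>
```

Because all of these connectors feed the same set of destinations inside the broker, a sensor publishing over MQTT can be consumed by a browser over WebSockets without any extra bridging code.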

The IoT will generate a lot of unstructured data, which needs to be correlated and analysed, and one of the leading big data solutions for doing this is Hadoop. If you want Hadoop to scale and perform, the best infrastructure to run it on is a combination of GlusterFS and OpenStack.

Getting real-time data into Hadoop's HDFS can be problematic, but JBoss Fuse already has some of the best solutions for doing just that.

Finally, if you want to use complex event processing to make decisions based on the flow of data from your connected devices, using causality and temporal logic, then JBoss BRMS is the best open source solution on the market.

Red Hat is going to be right at the centre of IoT solutions in 2014.


Friday, September 06, 2013

Apache Camel Broker Component for ActiveMQ 5.9


Embedding Apache Camel inside the ActiveMQ broker provides great flexibility for extending the message broker with the integration power of Camel. Apache Camel routes also benefit, in that you can avoid the serialisation and network costs of connecting to ActiveMQ remotely - if you use the activemq component.

One of the really great things about Apache ActiveMQ is that it works so well with Apache Camel.

If, however, you want to change the behaviour of messages flowing through the ActiveMQ message broker itself, you have been limited to the shipped set of ActiveMQ Broker Interceptors - or to developing your own Broker plugin and introducing it as a jar on the classpath for the ActiveMQ broker.

What would be really useful, though, is to combine the Interceptors and Camel together - making it easier to configure Broker Interceptors using Camel routes - and that's exactly what we have done for the upcoming ActiveMQ 5.9 release with the broker Camel component. You can include a camel.xml file in your ActiveMQ broker config - and then, if you want to take all messages sent to a Queue and publish them to a Topic, changing their priority along the way, you can do something like this:
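The route that originally accompanied this post isn't reproduced here, but a minimal camel.xml for that scenario looks something like the following sketch (the destination names are illustrative):

```xml
<camelContext xmlns="http://camel.apache.org/schema/spring">
  <route>
    <!-- Intercept messages arriving at the queue, inside the broker -->
    <from uri="broker:queue:test.queue"/>
    <!-- Bump the priority on the way through -->
    <setHeader headerName="JMSPriority">
      <constant>9</constant>
    </setHeader>
    <!-- Re-route to a topic instead of the original queue -->
    <to uri="broker:topic:test.topic"/>
  </route>
</camelContext>
```

Note that the route ends by sending back to a broker endpoint - as described below, an intercepted message must be explicitly returned to the broker component, or it is dropped.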

A few things worth noting:

  • The broker component only adds an intercept into the broker if it's started - so the broker component will not add any overhead to the running broker until it's used - and then the overhead will be trivial.
  • You intercept messages using the broker component when they have been received by the broker - but before they are processed (persisted or routed to a destination).
  • The in message on the CamelExchange is a Camel Message, but also a JMS Message (messages routed through ActiveMQ from Stomp/MQTT/AMQP etc. are always translated into JMS messages).
  • You can use wildcards on a destination to intercept messages from destinations matching the wildcard.
  • After the intercept, you have to explicitly send the message back to the broker component - this allows you to either drop select messages (by not sending) - or, like in the above case - re-route the message to a different destination.
  • There is one deliberate caveat though: you can only send messages to a broker component that have been intercepted - i.e. routing a Camel message from another component (e.g. File) would result in an error.
There are some extra classes that have been added to the activemq-broker package, to enable views of the running broker without using JMX and to support the use of the broker component:
org.apache.activemq.broker.view.MessageBrokerView provides methods to retrieve statistics on the broker, and from the MessageBrokerView you can retrieve an org.apache.activemq.broker.view.BrokerDestinationView for a particular destination. This means you can add flexible routing inside the broker by doing something like the following - to route messages when a destination's queue depth reaches a certain limit:
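The example route isn't reproduced here, but a sketch of that kind of route looks like the following. The destination names are illustrative, and exactly how the BrokerDestinationView is exposed to the spel expression - shown here as an exchange property - is an assumption, so check it against the 5.9 documentation:

```xml
<camelContext xmlns="http://camel.apache.org/schema/spring">
  <route>
    <!-- Intercept everything sent to the queue -->
    <from uri="broker:queue:test.queue"/>
    <choice>
      <when>
        <!-- Assumed accessor: the destination's BrokerDestinationView,
             exposed on the exchange, queried for its current depth -->
        <spel>#{properties['brokerDestinationView'].queueSize &gt;= 100}</spel>
        <!-- Divert to an overflow queue once the depth limit is reached -->
        <to uri="broker:queue:test.queue.overflow"/>
      </when>
      <otherwise>
        <!-- Otherwise let the message continue to its original destination -->
        <to uri="broker:queue:test.queue"/>
      </otherwise>
    </choice>
  </route>
</camelContext>
```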

This is using the Camel Message Router pattern - note the use of Spring expression language (spel) in the when clause.


Wednesday, August 21, 2013

Fuse days are back

One thing I constantly get asked about is the Fuse days that FuseSource used to run around Europe and the US - now that FuseSource is part of Red Hat, will they still be happening? Well, the answer is an emphatic YES! After taking some time to settle in and find out where the tea bags are hidden in the Red Hat middleware group, it's time to start things rolling again. We have been working out the messaging and integration strategy, and will be having an engineering face-to-face meeting in Dublin, Ireland in the week beginning 23rd September 2013. It's short notice - but we could hold an impromptu Fuse day in Dublin that week.
You may even get to find out what we are doing in 2014 before the engineers!

Drop me a line if you want to attend - it'll be free - you just have to get yourself to Dublin. I'll be posting dates for upcoming Fuse days in Europe and the US over the next couple of weeks - now where's my cup of tea ...

Friday, May 31, 2013

Connecting Applications Everywhere with ActiveMQ

This year at CamelOne there are going to be some exceptional presentations, but I'm also presenting "Connecting Applications Everywhere with ActiveMQ".

The focus of this presentation is to demonstrate the many protocol options and deployment scenarios available to Apache ActiveMQ. After an introduction to the Apache ActiveMQ project, and presenting why the "Internet of Things" is going to be driving the agenda for integration and messaging over the next 5 years, I'll be demonstrating an example application, going from an Arduino microcontroller using MQTT, to an MQTT/AMQP ActiveMQ gateway, and on to an ActiveMQ broker that will service HTML clients over WebSockets - something like this:








With so many linked components in a live demo - what could go wrong ? ;)

I'm hoping to catch up with lots of folks at CamelOne - if you haven't registered, it's not too late!

Update:

Unfortunately, at this year's CamelOne there weren't any video recordings of sessions. Sharing slides is easy, but if you do a demo you need video. Luckily I managed to re-create the whole CamelOne presentation and demo in a DevZone webinar - the video of which is below:

Wednesday, May 22, 2013

What to look forward to at CamelOne, June 10-11th, 2013!

This year is going to be the third CamelOne, and it's going to be quite different from previous
CamelOne conferences.

Firstly, we have a new host: Red Hat, who has kindly agreed to host CamelOne at the Hynes Convention Center, at the same time as JudCon and the Red Hat Developer Exchange. This means people attending will be able to move between these different events and pick 'n' mix what they go to.
It also means there will be a chance for attendees to mingle - and see what's happening on both sides of the open source fence.

The second thing you'll notice is that there is going to be a very strong emphasis on open source projects. I've no doubt the occasional product may get a mention, but the aim of CamelOne this year is to educate and to share experiences of using the best open source integration software out there. If you look at the agenda, you will see there's a real mix of customer experience stories and the best ways to use the Apache projects to be successful.

Thirdly, the overall theme is going to show the direction that integration projects from the ASF (Apache Camel, ActiveMQ, CXF, Karaf, ServiceMix) are taking to address the integration needs of the next five years: the Internet of Things, the proliferation of cloud APIs, and mobile. Attendees will also see the direction the Fuse engineering team inside Red Hat is taking for future projects, in particular better management and cloud-based integration.

This is going to be the best CamelOne yet - we are expecting record attendance - and I look forward to seeing you there!