Aloha from Hawaii! Shame on me for not blogging for months. Honestly, I have been quite busy these past few months: the big move to Hawaii in December and the recent launch of Floify kept me occupied. Although moving to a new place is fun, getting settled on an island can sometimes be more stressful. Now that all of that is taken care of, there is no good reason not to blog from the rainbow nation 🙂

Recently, I was working with an evaluator on a proof of concept that required scheduling workflows from a web service request. In Flux, this can be easily implemented as a trigger, which typically waits for an event to occur. Flux ships with popular triggers out of the box, based on timers, database conditions, files, audit trail events, or mail. A while ago, I implemented a custom web service trigger that supports both SOAP and HTTP requests and expedites the workflow to the subsequent step. The sketch below illustrates the idea behind such a web service trigger in your Flux workflows.
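
This is not the actual Flux trigger plugin; it is a minimal illustration of the pattern such a trigger follows, using only the JDK's built-in com.sun.net.httpserver. The port and context path are arbitrary choices for the example: spin up an embedded HTTP listener, block the workflow step until a request arrives, then expedite to the subsequent step.

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.concurrent.CountDownLatch;

public class WebServiceTriggerSketch {
    public static void main(String[] args) throws Exception {
        final CountDownLatch requestReceived = new CountDownLatch(1);

        // Embedded web server that stands in for the trigger's listener.
        HttpServer server = HttpServer.create(new InetSocketAddress(9191), 0);
        server.createContext("/trigger", new HttpHandler() {
            public void handle(HttpExchange exchange) throws IOException {
                byte[] body = "workflow expedited".getBytes();
                exchange.sendResponseHeaders(200, body.length);
                OutputStream os = exchange.getResponseBody();
                os.write(body);
                os.close();
                requestReceived.countDown(); // signal the waiting workflow step
            }
        });
        server.start();

        // The trigger "waits for an event to occur" -- here, an HTTP request.
        requestReceived.await();
        server.stop(0);
        System.out.println("Trigger fired; expediting to the subsequent step");
    }
}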

The sample client code that expedites the trigger is shown below.
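
Here is a hedged sketch of such a client using plain HttpURLConnection; the host, port, and path are assumptions that must match wherever your trigger listens.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class TriggerClient {
    public static void main(String[] args) throws Exception {
        // Assumed endpoint -- point this at your trigger's listener.
        URL url = new URL("http://localhost:9191/trigger");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);

        // Any request body will do for this sketch; the arrival of the
        // request is what expedites the trigger.
        OutputStream os = conn.getOutputStream();
        os.write("fire".getBytes());
        os.close();

        System.out.println("Trigger responded with HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}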

With this plugin, your workflows can be designed to trigger on web service requests, and it also allows users to configure an embedded web server that runs as part of the workflow. For one-shot workflows, this nicely fits the bill. For a recurring workflow, I would classify this approach as heavyweight, mainly because it spins up a web server for each run of your workflow, which may not be ideal for high-performance workflows. It makes more sense to reuse a single web server instance that accepts requests and triggers a workflow template from the repository. I do not believe this can be easily supported without making some core changes in Flux. But it is not the end of the world; there is a more efficient way to implement this in Flux today by using the Flux Cluster APIs.

The Flux 7.11 Operations Console exposes a set of APIs that allows clients to talk to a Flux engine cluster via a simple HTTP interface. In essence, the Flux Opsconsole acts as a web service trigger for the cluster. You can find the documentation of these APIs here. The API that schedules a workflow template from the repository will be available in 7.11.4. If you would like to try this out, you can request a 7.11.4 build from Flux support; we would be happy to send one your way. The Operations Console deploys HTTP resources that can be accessed in a number of ways. If you are a Java shop, you might consider something similar to the code shown below. This sample uses the Jersey APIs to POST a multipart request to the endpoint. You just need to toss in restaction.jar, which is part of the Flux distribution.
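
A sketch of what that POST might look like with the Jersey 1.x client and its multipart support; the endpoint URL and template name are placeholder assumptions (consult the Operations Console API documentation for the real resource paths), while the property names match those discussed below.

import com.sun.jersey.api.client.Client;
import com.sun.jersey.api.client.ClientResponse;
import com.sun.jersey.api.client.WebResource;
import com.sun.jersey.multipart.FormDataMultiPart;

import javax.ws.rs.core.MediaType;

public class RepositoryScheduler {
    public static void main(String[] args) {
        Client client = Client.create();
        // Placeholder URL -- substitute your Opsconsole host and resource path.
        WebResource resource = client.resource("http://localhost:8080/flux/repository/schedule");

        FormDataMultiPart multiPart = new FormDataMultiPart();
        multiPart.field("template", "MyWorkflowTemplate"); // required: template name in the repository
        multiPart.field("category", "nightly");            // optional workflow variable
        multiPart.field("rate", "42");                     // optional workflow variable
        // multiPart.field("namespace", "MyInstanceName"); // optional: customize the instance name

        ClientResponse response = resource
                .type(MediaType.MULTIPART_FORM_DATA_TYPE)
                .post(ClientResponse.class, multiPart);
        System.out.println("HTTP status: " + response.getStatus());
    }
}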

In this example, we add three properties to the multipart request. The first one, “template”, is required and specifies the name of the template in the repository. The “category” and “rate” properties are optional variables that will be made available in your workflow instance. You can add as many data points as you would like to pass on to your workflow instance. You can also optionally customize the name of the workflow instance that you would like to spin off from the repository by setting the “namespace” property in the request.

There is another API that might interest you as well. This API schedules a given flow chart on the engine. The major difference here is that you are exporting a workflow file to the engine instead of spinning up an instance from an existing repository template.
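
A similar sketch for this second API; the endpoint URL, form part name, and file name are again assumptions for illustration.

import com.sun.jersey.api.client.Client;
import com.sun.jersey.api.client.ClientResponse;
import com.sun.jersey.api.client.WebResource;
import com.sun.jersey.multipart.FormDataMultiPart;
import com.sun.jersey.multipart.file.FileDataBodyPart;

import javax.ws.rs.core.MediaType;
import java.io.File;

public class FlowChartScheduler {
    public static void main(String[] args) {
        Client client = Client.create();
        // Placeholder URL -- substitute the engine scheduling resource path.
        WebResource resource = client.resource("http://localhost:8080/flux/engine/schedule");

        // Export a workflow file to the engine instead of referencing a repository template.
        FormDataMultiPart multiPart = new FormDataMultiPart();
        multiPart.bodyPart(new FileDataBodyPart("flowchart", new File("MyWorkflow.ffc")));

        ClientResponse response = resource
                .type(MediaType.MULTIPART_FORM_DATA_TYPE)
                .post(ClientResponse.class, multiPart);
        System.out.println("HTTP status: " + response.getStatus());
    }
}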

Enjoy developing in Flux and Mahalo for stopping by!



In my earlier blog entry on Jersey, I used the HttpClient API and the curl command-line utility as the clients. I had not mentioned the Jersey Client API, which is part of the Jersey distribution. I prefer the Jersey Client API as it is modeled around the concepts of the JAX-RS spec. Let us quickly rewrite our client using the Jersey API and see how easy it is to write a client in Java.


import com.sun.jersey.api.client.Client;
import com.sun.jersey.api.client.WebResource;
import restful.impl.jaxb.MovieCollection;
import restful.impl.jaxb.MovieDetails;

import static java.lang.System.out;

public class JerseyClient {
    public static void main(String[] args) {
        // Create a Jersey client and point it at the movies resource.
        Client client = Client.create();
        WebResource r = client.resource("http://localhost:9090/boxoffice/movies");

        // Request the XML representation and unmarshal it into JAXB objects.
        MovieCollection movies = r.accept("application/xml").get(MovieCollection.class);
        for (MovieDetails movie : movies.getMovie()) {
            out.println("Title : " + movie.getTitle());
            out.println("Genres : " + movie.getGenres());
            out.println("Directed By : " + movie.getDirectedBy());
            out.println("Rank : " + movie.getRank());
        }
    }
}

This program prints all the top box office movie details to the console. I will discuss more about this client API in future posts.

The Jersey user forum is the first place to check for any Jersey-related issues, and the Jersey team is amazingly helpful in resolving them. You can also visit Paul’s blog here, Marc’s blog here, and Jakub’s blog here; bookmark these blogs as they are really informative. Grab the Jersey team’s JavaOne 2008 presentation slide deck here.

You can download the NetBeans project for this sample movie application here.



Message brokering markets were once dominated by heavyweights and required huge investments by enterprises implementing such solutions. Vendors made huge bucks selling these solutions and their support. Is this still a niche market? I personally don’t think so. I am definitely not against vendors offering such solutions, but the point I am making is whether this market still deserves huge investments when mature open source alternatives are available.

You have essentially two options. The first: buy an off-the-shelf (OTS) solution and forget about the maintenance nightmare, as typically argued by the so-called “Enterprise Architects” who assume it’s a safe bet. Such decisions tightly couple your enterprise to these solutions. Before you know it, you may end up paying huge bucks for feature requests, lobbying over license costs, and finally being stuck with the custom solution for years. This choice comes at a huge price, but with some convenience, as no one can blame you for a missing feature or a buggy solution. Is that convenience worth it anymore?

Historically, enterprises chose such solutions mainly because of the support and maintenance bundled with them, and they always awarded such contracts to market leaders. Then, when the vendor decided to release a newer version of the product, customers had to upgrade and were sometimes forcibly migrated, just because the earlier version did not scale, lacked a feature, or was discontinued.

On the contrary, some enterprises (mostly small to mid-size) consider implementing such a messaging solution using open source alternatives, and people who make this choice need to be really smart because of the liability that comes with their decision. In most situations, such decisions are made by experts who are aware of the complexity involved in such an undertaking. For the most part, adopting such a solution is straightforward, though it can be painful when technical expertise is lacking within the enterprise. But it will definitely be a rewarding experience when working with smart people on such integrations, and the open source community offers its best support in resolving any issues. This option will surely benefit enterprises in terms of licensing and support costs. The key assumption, though, is that you are betting on your own expertise in such engagements.

Open source solutions are increasingly becoming a compelling choice for enterprises because of their mature feature sets and wide adoption backed by strong user communities. ActiveMQ is one such ASF project, with tons of features available to its users. I have experience working on similar commercial products, but ActiveMQ is absolutely mind-blowing. Simplicity wins the hearts of enterprise developers. Some commercial offerings take days to install and configure, require special hardware (high-end servers), and sometimes need on-site training from the vendor. With projects like ActiveMQ, teams can develop, test, and roll out integration solutions in a much shorter period of time. It takes just minutes to install and configure ActiveMQ on your desktop.

I recently downloaded ActiveMQ and played around with it. Let us analyze its startup messages, dissect some of them, and see what they offer. This is just the tip of the iceberg.


 (1) D:\apache-activemq-5.1.0\bin>activemq.bat
 (2) ACTIVEMQ_HOME: D:\apache-activemq-5.1.0\bin\..
 (3) ACTIVEMQ_BASE: D:\apache-activemq-5.1.0\bin\..
 (4) Loading message broker from: xbean:activemq.xml
 (5) INFO  BrokerService                  - Using Persistence Adapter: AMQPersistenceAdapter(D:\apache-activemq-5.1.0\bin\..\data)
 (6) INFO  BrokerService                  - ActiveMQ 5.1.0 JMS Message Broker (localhost) is starting
 (7) INFO  BrokerService                  - For help or more information please see: http://activemq.apache.org/
 (8) INFO  AMQPersistenceAdapter          - AMQStore starting using directory: D:\apache-activemq-5.1.0\bin\..\data
 (9) INFO  KahaStore                      - Kaha Store using data directory D:\apache-activemq-5.1.0\bin\..\data\kr-store\state
(10) INFO  AMQPersistenceAdapter          - Active data files: []
(11) INFO  KahaStore                      - Kaha Store using data directory D:\apache-activemq-5.1.0\bin\..\data\kr-store\data
(12) INFO  TransportServerThreadSupport   - Listening for connections at: tcp://nandi:61616
(13) INFO  TransportConnector             - Connector openwire Started
(14) INFO  TransportServerThreadSupport   - Listening for connections at: ssl://nandi:61617
(15) INFO  TransportConnector             - Connector ssl Started
(16) INFO  TransportServerThreadSupport   - Listening for connections at: stomp://nandi:61613
(17) INFO  TransportConnector             - Connector stomp Started
(18) INFO  TransportServerThreadSupport   - Listening for connections at: xmpp://nandi:61222
(19) INFO  TransportConnector             - Connector xmpp Started
(20) INFO  NetworkConnector               - Network Connector default-nc Started
(21) INFO  BrokerService                  - ActiveMQ JMS Message Broker (localhost, ID:nandi-64041-1211065443740-0:0) started
(22) INFO  log                            - Logging to org.slf4j.impl.JCLLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
(23) INFO  log                            - jetty-6.1.9
(24) INFO  WebConsoleStarter              - ActiveMQ WebConsole initialized.
(25) INFO  /admin                         - Initializing Spring FrameworkServlet 'dispatcher'
(26) INFO  log                            - ActiveMQ Console at http://0.0.0.0:8161/admin
(27) INFO  log                            - ActiveMQ Web Demos at http://0.0.0.0:8161/demo
(28) INFO  log                            - RESTful file access application at http://0.0.0.0:8161/fileserver
(29) INFO  log                            - Started SelectChannelConnector@0.0.0.0:8161
(30) INFO  FailoverTransport              - Successfully connected to tcp://localhost:61616

The following features are enabled by default when an ActiveMQ broker is started.

Line 8 – AMQStore is the default message store for ActiveMQ 5 and above.
Line 11 – KahaStore is a performance-optimized storage solution used for message persistence.
Line 12 – OpenWire is a cross-language wire protocol which allows native access to ActiveMQ from a number of different languages and platforms.
Line 14 – The SSL transport allows clients to connect to a remote ActiveMQ broker using SSL over a TCP socket.
Line 16 – Stomp is used by non-Java clients to talk to ActiveMQ and other message brokers (see the sketch after this list).
Line 18 – XMPP (Jabber) is used to connect to the broker and send and receive messages.
Line 26 – The ActiveMQ Web Console manages broker services such as queues, topics, and subscriptions.
Line 28 – A RESTful API to messaging (JMS); browsing of queues is implemented using pluggable views such as ATOM and RSS feeds.
Line 30 – The failover transport is used for reconnection.
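
To give a feel for how simple Stomp is, here is a bare-bones sketch that speaks the raw protocol over a socket to the broker started above; a real client would use a proper Stomp library, and the queue name is arbitrary.

import java.io.OutputStream;
import java.net.Socket;

public class StompSketch {
    public static void main(String[] args) throws Exception {
        // Port 61613 is the stomp connector from line 16 of the startup log.
        Socket socket = new Socket("localhost", 61613);
        OutputStream out = socket.getOutputStream();

        // Stomp frames are plain text terminated by a NUL byte.
        out.write("CONNECT\n\n\0".getBytes());
        out.write("SEND\ndestination:/queue/TEST\n\nhello from stomp\0".getBytes());
        out.flush();

        // Read back the broker's CONNECTED frame.
        byte[] buf = new byte[1024];
        int n = socket.getInputStream().read(buf);
        System.out.println(new String(buf, 0, n));

        socket.close();
    }
}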

There are tons of other features that are beyond the scope of this discussion. One notable feature is the embedded message broker, which comes in handy in unit testing (a quick sketch follows below). You can refer to the complete feature set here. There are sub-projects within ActiveMQ such as NMS (for .NET) and CMS (for C++) which provide unified access to ActiveMQ from other programming language environments. Camel is another sub-project which implements enterprise integration patterns; it’s a topic for another blog entry.
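
As a quick illustration of the embedded broker, here is a minimal sketch that uses the vm:// transport to start a broker inside the JVM and send and receive a message; the queue name and the persistent=false option are just choices for the example.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class EmbeddedBrokerExample {
    public static void main(String[] args) throws Exception {
        // The vm:// transport starts an embedded, non-persistent broker in this JVM.
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("TEST.QUEUE");

        MessageProducer producer = session.createProducer(queue);
        producer.send(session.createTextMessage("Hello, embedded broker!"));

        MessageConsumer consumer = session.createConsumer(queue);
        TextMessage received = (TextMessage) consumer.receive(1000);
        System.out.println("Received: " + received.getText());

        connection.close();
    }
}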

All of this makes a compelling case for adoption. Enterprises sometimes face real complexity around cost of ownership when trying to adopt such open source solutions, but this can be overcome by choosing support offerings from companies like IONA, Covalent (now part of SpringSource), and OpenLogic. This may still work out to be cost-effective when compared to OTS solutions. The market should favor such healthy adoption.



Effective Java, second edition, finally started shipping this week. I dreamed of this day in one of my earlier blog posts 🙂 As predicted, it finally made it this year during JavaOne. This year, too, Joshua had a session on “More ‘Effective Java’” – his third in a row since 2006.

The book was published on May 8, 2008, and I pre-ordered it at Amazon thinking the book would hit the stores only on May 28, as per their website. But I then decided to order it at informIT as it was available immediately (not to mention the best deal I could find on the Internet). I should be getting it next week. You can read sample chapters from here and here. I went through the Generics chapter and it’s pretty impressive.

As always, it’s definitely a programmer’s asset. This second edition is reloaded with 21 more items of best programming practice, primarily covering JDK 1.5. You can read Joshua Bloch’s recent interviews at InfoQ and java.sun.com.

You can even buy an autographed copy from craigslist; it comes at a premium of $50. Josh has always been a rock star in the Java community.



A Google Code project on detecting singletons in Java code has released its code under the Apache Software License. This tool detects singletons based on their usage patterns, which it classifies as {S, H, M, F}+ingleton. The author introduces these terminologies based on whether the Singleton is implemented in a classic, helper, method, or field based approach; the details can be learned from the project website. The output is written in GraphML format, which can be viewed using yEd. This provides a visual representation of singletons and their class references within a library, helping developers fix unwise uses of Singletons. Here is a sample graph generated by the Google Singleton Detector (GSD) for the DOM4J library.

GSD-generated GraphML representation of the DOM4J library

Singletons were once acclaimed as among the most used creational design patterns in Java. In the past few years, their usage has been mostly restricted by developers due to their lack of testability in the agile, test-driven development world. GSD is at an early stage and has some known limitations, but it is a good start and a niche tool for Java developers to detect singletons in legacy projects and refactor them if needed.

As defined by the GoF in “Design Patterns: Elements of Reusable Object-Oriented Software“, the purpose of the Singleton pattern is to ensure a class has only one instance and to provide a global point of access to it. Java has widely adopted this pattern since its inception. There are many arguments discouraging the use of Singletons in Java, as they introduce global state which makes code difficult to test. There are also known issues around the double-checked locking pattern (DCLP): there is no guarantee it will work on single- or multi-processor machines, due to the out-of-order writes allowed by JVMs prior to 1.5. The double-checked locking idiom was used to avoid expensive synchronization in Singleton implementations, but over time it proved ineffective, until the memory model was revised in 1.5. So the fallback option prior to 1.5 was to accept synchronization or use a static field. The classic singleton implementation in Java using a static field is shown below.


public class Singleton {
    private static Singleton _instance;

    // Private constructor prevents instantiation from outside the class.
    private Singleton() {
    }

    // Lazily creates the single instance on first use.
    public static Singleton Instance() {
        if (_instance == null)
            _instance = new Singleton();
        return _instance;
    }
}

This design ensures that only one instance of the Singleton object is ever created: the constructor is declared private, and the static Instance() method creates at most one object. This implementation holds good for a single-threaded program. However, when multiple threads are introduced, it is possible for the Instance() method to return two different instances of the Singleton object. Synchronizing the Instance() method would solve this issue at the cost of some performance overhead, but it eliminates our threading issues. The code above is modified below to add synchronization around the lazy initialization.


public class Singleton {
    private static Singleton _instance;

    private Singleton() {
    }

    public static Singleton Instance() {
        // Note: the null check happens outside the synchronized block.
        if (_instance == null) {
            synchronized (Singleton.class) {
                _instance = new Singleton();
            }
        }
        return _instance;
    }
}

Even with this approach, it is not guaranteed that only one instance of the Singleton object will be created, because the null check happens outside the synchronized block: two threads can both find _instance null, then acquire the lock one after the other, and each create a Singleton object, violating the purpose of the pattern. To circumvent this problem, the double-checked locking idiom was introduced. The following code shows the double-checked locking implementation of the Singleton pattern.


public class Singleton {
    private static Singleton _instance;

    private Singleton() {
    }

    public static Singleton Instance() {
        if (_instance == null) {                    // first check, without locking
            synchronized (Singleton.class) {
                if (_instance == null) {            // second check, under the lock
                    _instance = new Singleton();
                }
            }
        }
        return _instance;
    }
}

DCLP prevents multiple threads from creating multiple Singleton objects. However, it is still not guaranteed to work on either single- or multi-processor machines, due to the out-of-order writes observed in various JVM implementations prior to 1.5. The reason is that the writes which initialize the Singleton object and the write to the _instance field can be reordered by the compiler or the cache, which would have the effect of returning a partially constructed Singleton object. Another thread could then read an uninitialized object, which is the problem in JVMs prior to 1.5.

Under the new memory model implemented as part of JSR 133, making the _instance field volatile solves the problem with DCLP. This is because the initialization of the Singleton object by the constructing thread then happens-before the return of its value by the thread that reads it. This JSR refines the semantics of threads, locks, volatile variables, and final fields.
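
Under Java 5 and later, the fix to the DCLP code above is a one-word change: declare the _instance field volatile. A minimal sketch:

public class Singleton {
    // volatile (under the JSR 133 memory model) prevents the reordering
    // that could otherwise expose a partially constructed instance.
    private static volatile Singleton _instance;

    private Singleton() {
    }

    public static Singleton Instance() {
        if (_instance == null) {
            synchronized (Singleton.class) {
                if (_instance == null) {
                    _instance = new Singleton();
                }
            }
        }
        return _instance;
    }
}

The JSR 133 effort also proposes a replacement for DCLP, the “Initialization-on-Demand Holder” idiom, which is thread safe, simpler to use, and avoids these initialization issues: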


public class Singleton {
    private Singleton() {
    }

    // The holder class is not loaded until Instance() is first called;
    // class initialization is guaranteed to be thread safe by the JVM.
    private static class LazySingletonHolder {
        public static Singleton _instance = new Singleton();
    }

    public static Singleton Instance() {
        return LazySingletonHolder._instance;
    }
}
