February 25, 2013 / Keyhole Software

My Experience With FireDaemon Pro


There’s a nifty piece of software I’ve run across called FireDaemon Pro – I’d love to share my experiences, as it’s a great tool to have in your tool pouch.

The basics of the software are simple: take just about any Windows application or script and turn it into a Service. To name a few, FireDaemon Pro handles 32- and 64-bit EXE and DLL applications, BAT, CMD, Perl and Java. Before I rattle off more of the bells and whistles, let me give you some context on what we were doing at the time, and how we ended up using this product.

The Situation

A teammate and I were working in a small IT shop that provided services for a federally regulated industry. We were part of a team of four that supported and implemented solutions for day-to-day operations. Each of our roles was siloed by the business teams we supported. My teammate had been there longer than I had and had developed quite a few .NET applications of various flavors, while my predecessors had come from more of an “anything but Microsoft” philosophy. I had just come off of my first .NET project and was eager to continue exploring the world of .NET. At some point, we decided that any new development would be implemented in .NET for ease of cross-training, skill portability for projects spanning our business teams, and subsequent ease of support for each of our respective roles.

Fast-forward several months: my teammate had implemented a prototype serial port application that received time-sensitive, federally regulated data and saved it to a database for another process to pick up and push into a customer-facing system. Serial port implementation in a .NET application is fairly easy to accomplish, but had its challenges in a service-oriented implementation. With time and resource constraints, not to mention “money savings to be had” pressure, my teammate had created a forms application to get the proof-of-concept up and going. Unfortunately/fortunately, our internal customers were so pleased with the process that they pushed for it to be implemented into production as-is. At the time it didn’t seem to be a big deal: we would run it as a forms application on the production server with the appropriate security. After all, there are only so many IT resources and projects were stacked high and tight, so better to implement as-is and move on…right?

Let’s just say many 3 AM phone calls later, it was starting to be a bit on the painful side.

While the above project was in-flight and a few months ahead, I was off and running on a new development project where I wanted to leverage the Microsoft Office PIAs in a .NET application to process Excel files generated by third-party software (I know, I know … Apache POI … believe me, I know. .NET was just so shiny at the time). My teammate had not yet been bitten by the “put it in production as-is” direction, so when I started to create my application, I followed suit and also developed a Forms application to do my testing with.

Long story short, I found out the hard way that Microsoft does not support a service implementation using their PIAs. So, I was faced with implementing a Forms application in production.

It was about that time that one of our server guys (who had been involved with the various issues brought to light by running a Forms application on a production server – i.e. not being able to do a server bounce without manually starting an application) introduced us to FireDaemon Pro. It was the cure for what ailed us. With a small change to the forms application and about 15 minutes worth of install and configuration time on the server, the Forms-application-turned-service worked flawlessly.

No more manual starts, no more 3 AM calls.

FireDaemon Overview

Something great about FireDaemon Pro is that it has all sorts of configuration choices: automatic re-start, email upon failure or re-start, and automatic pop-up handling, to name a few.

Here’s a more complete feature list, from the FireDaemon Pro website:

  • Monitor and restart your app if it crashes, hangs or shuts down
  • Schedule your app to start/restart at specific times and dates – before you log in
  • Ability to start and run your application in the background without user intervention.
  • Ability to run your application continually across multiple user sessions.
  • Run your application either interactively or non-interactively.
  • Restarts your application in the event of failure, unintentional or malicious shutdown or at scheduled predetermined intervals.
  • Ability to modify your application’s priority and bind to specific processors or cores.
  • Execute additional transient programs during the service lifecycle.
  • Control, log and close popups that your application might display.
  • Assists in meeting various government regulations, Acts and standards pertaining to computing systems robustness, security, management, access and control (eg. Sarbanes-Oxley (SOX), ITIL).

Aside from having to find this product the way we did, I’m glad to have had the experience. This is an excellent tool to have in your tool pouch, so make sure to check it out in more detail here. For more how-tos, see its features page.

— Keith LaPee,

February 25, 2013 / Keyhole Software

A Look Into AngularJS – The “Super-heroic JavaScript MVW Framework”

With the growth and strength of HTML5 and the increasing performance of modern browsers, many JavaScript frameworks have been created to help develop rich client applications. These frameworks/libraries give developers a huge toolkit to build enterprise complexity into client-side applications. Server-side UI frameworks are becoming a thing of the past, replaced by applications written in Backbone, Ember, AngularJS, Knockout, etc.

So why am I talking about AngularJS over frameworks/libraries like Backbone, Ember, or Knockout?

For me, the major points of separation in AngularJS’s favor are the following:

    • Good documentation
    • Write less code to do more
    • Backed by Google
    • Good developer community
    • Simple Data-Binding
    • Small footprint

What I am not doing is a side-by-side comparison of the top contenders in this area – we’ll save that for a future blog, by me or one of my colleagues. The goal of this post is to pique your interest and run through a few key features of AngularJS – the “Super-heroic JavaScript MVW Framework.” Let’s begin:

Key Features of AngularJS


Scope

The job of the Scope is to detect changes to model objects and create an execution context for expressions. There is one root scope per application (ng-app), with hierarchical child scopes beneath it. The scope marshals the model to the view and forwards events to the controller.

Take a look at my simple example of $scope (Example presented with Plunker.)


Controller

The Controller is responsible for constructing the model and connecting it to the view (HTML). The scope sits between the controller and the view. Controllers should be straightforward and contain only the business logic needed for a view; generally you want thin controllers and rich services. Controllers can be nested and handle inheritance. The big difference between AngularJS and other JavaScript frameworks is that there is no DOM manipulation in controllers. It is something to unlearn when developing in AngularJS.

A look at the controller:
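Here is a minimal sketch of the idea (the controller and property names are illustrative, not from the original Plunker). In AngularJS 1.x, a controller can be a plain function that receives $scope:

```javascript
// Illustrative controller: AngularJS injects $scope when the view
// declares ng-controller="EmployeeCtrl".
function EmployeeCtrl($scope) {
  // Construct the model and attach it to the scope.
  $scope.employees = ['Smith', 'Jones'];

  // Behavior needed by the view; note there is still no DOM work here.
  $scope.addEmployee = function (name) {
    $scope.employees.push(name);
  };
}
```

The view would reference it with `ng-controller="EmployeeCtrl"`; the function only shapes the model and behavior and never touches the DOM.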


Model

In AngularJS, a Model is simply a JavaScript object. There is no need to extend anything or create any special structure. This allows for nested models, something that Backbone doesn’t do out-of-the-box.


View

The View is based on DOM objects, not on strings; the view is the HTML. HTML is declarative, which makes it well suited for UI design. The View should not contain any functional behavior. The flexibility here allows for multiple views per Controller.


Services

The Services in AngularJS are singletons that perform common tasks for web applications. If you need to share common functionality between Controllers, use a Service. AngularJS’s built-in services start with a $. There are several ways to build a service: the Service API, the Factory API, or the $provide API.

Service example of sharing a list:
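A hedged sketch of such a service (the service and method names are mine). The factory function below returns the shared object; registering it with app.factory makes AngularJS call it once and hand every controller the same instance:

```javascript
// Illustrative factory function for a shared list.
function listService() {
  var items = [];   // private state, shared by every consumer

  return {
    add: function (item) { items.push(item); },
    getAll: function () { return items; }
  };
}

// Registration (requires angular.js):
// var app = angular.module('myApp', []);
// app.factory('listService', listService);
```

Two controllers that both inject `listService` would then read and write the same list.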

Data Binding

Data Binding in AngularJS is a two-way binding between the View and the Model. Automatic synchronizing between views and data models makes this really easy (and straightforward) to use. Updating the model is reflected in the View without any explicit JavaScript code to bind them together or to add event listeners to reflect data changes.

All my examples have had data binding in them, but here is a super simple example:
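A minimal markup sketch of two-way binding (all names are illustrative, and it assumes angular.js is loaded); notice that no listener or binding code is required:

```html
<!-- ng-app bootstraps the application. -->
<div ng-app>
  <!-- Typing in the input updates the model... -->
  <input type="text" ng-model="name" placeholder="Enter a name">
  <!-- ...and the expression re-renders immediately. -->
  <p>Hello, {{name}}!</p>
</div>
```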


Directives

Now this is cool. AngularJS allows you to use Directives to transform the DOM or to create new behavior. A directive lets you extend the HTML vocabulary in a declarative fashion. The ng prefix denotes built-in AngularJS directives: the App (ng-app), Model (ng-model), Controller (ng-controller), etc. are built into the framework. AngularJS also lets you build your own directives. Building directives is not extremely difficult, but it isn’t trivial either, and there is a wide range of things that can be done with them. Please check out AngularJS’s documentation on directives.

Here is a simple stopwatch example:
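Below is a hedged reconstruction of what a stopwatch directive could look like; the module name, element name, and template are mine, and it assumes angular.js is loaded:

```html
<div ng-app="stopwatchApp">
  <!-- The custom element the directive teaches the browser about. -->
  <stopwatch></stopwatch>
</div>

<script>
  angular.module('stopwatchApp', []).directive('stopwatch', function ($timeout) {
    return {
      restrict: 'E',                        // match as an element
      template: '<span>{{seconds}} s</span>',
      link: function (scope) {
        scope.seconds = 0;
        (function tick() {                  // $timeout keeps the view in sync
          $timeout(function () { scope.seconds++; tick(); }, 1000);
        })();
      }
    };
  });
</script>
```

A real stopwatch would also want start/stop controls, but the directive definition object (restrict, template, link) is the interesting part.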


Filters

The Filters in AngularJS perform data transformation. They can be used for formatting (like I did in my Directives example with padding zeros), or to filter results (think search).

Since I already did the formatting example, here is a search using a filter:
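A small sketch of a search using the built-in filter filter (the names here are illustrative, and it assumes angular.js is loaded):

```html
<div ng-app ng-init="names = ['Smith', 'Jones', 'McKinzie']">
  <input type="text" ng-model="query" placeholder="Search names">
  <ul>
    <!-- Only the names matching the query are rendered. -->
    <li ng-repeat="name in names | filter:query">{{name}}</li>
  </ul>
</div>
```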


Validation

AngularJS has some built-in validation around the HTML5 input types (text, number, url, email, radio, checkbox) and some directives (required, pattern, minlength, maxlength, min, max). If you want to create your own validation, it is as simple as creating a directive that performs the validation.

Here is an example using AngularJS’s built-in validation:
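A sketch of the built-in validation (the form and field names are mine, and it assumes angular.js is loaded):

```html
<div ng-app>
  <!-- novalidate turns off native browser validation so AngularJS's runs. -->
  <form name="empForm" novalidate>
    <input type="email" name="email" ng-model="email" required>
    <span ng-show="empForm.email.$error.required">Email is required.</span>
    <span ng-show="empForm.email.$error.email">Not a valid email address.</span>
    <button ng-disabled="empForm.$invalid">Save</button>
  </form>
</div>
```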


Testing

Testing is a big concern for enterprise applications. There are several different ways to write and run tests against JavaScript code, and thus against AngularJS. The AngularJS developers advocate writing Jasmine tests and running them with Testacular. I have found this method of testing very straightforward and, while writing tests may not be the most enjoyable part, it is just as important as any other piece of developing an application.


I have enjoyed developing with AngularJS. I hope this post has, at the very least, convinced you to spend a couple of hours playing with AngularJS.

To start, spend some time going through the AngularJS tutorial. Then create your own Custom AngularJS Plunker and see how quick client-side development can be. As I said at the beginning, AngularJS has a really good community and very clean documentation, which goes into much more detail than this post. Thanks to the AngularJS team for developing this framework.

— Josh McKinzie,


February 19, 2013 / Keyhole Software

Remove The Fluff With Google Guava

As a part of the “lazy” programmer club, I try to find and use things that have solved problems similar to mine. In Java, we all have small problems that need to be solved every day. How many times has an Object.equals() method been implemented in a single application? How many unit tests does it take to verify the Object.equals() logic works? The same thing applies to file I/O, reflection, Strings, and so much more. As trivial as some of these things seem, it is easy enough to miss a small detail. Time and lines of code can be saved by using something that already exists. In the end, productivity goes up and bugs go down.

I don’t want to spend a lot of effort trying to read code when it just isn’t necessary. I want to see method names that describe what they are doing. To my eyes, it is easier to understand what a method does when it is named checkNotNull(T) than when the name only contains notNull(T). The intent of notNull(T) could be misinterpreted to mean converting T to a not-null value. It is little things like these that make code more descriptive, easier to read, and a better use of my time. I also don’t want to read a lot of lines of code consisting of int i = 0;. All that fluff just obfuscates the code and takes the focus away from the parts that really matter. It’s better to be resourceful.

Google Guava

Guava has become one of my favorite libraries. Guava was born from Google’s internal labs. The libraries contained in Guava were found to be the most useful utilities during development of their projects. Unlike some other utility frameworks, Guava takes advantage of Java features like generics and reflection to simplify and keep the use of the API clean. According to the Guava Wiki, Google releases new fixes and functions every three months. If you don’t keep up with the bleeding edge updates, be aware that deprecated methods are removed after 18 months. Although the frequent API changes may cause some code changes, it does help keep the framework clean.

A major goal of Guava is to make code more descriptive and a whole lot cleaner. Take Object.equals() for example. Instead of implementing equals() filled with a bunch of if statements, Guava enables you to fill equals() with a series of Objects.equal() calls. Which one sounds easier to create, and which one sounds easier to read? The functionality that matters, making the proper comparisons, still exists; the syntactic fluff surrounding the important parts is removed. It is easier to verify that the correct properties are being compared when all you see is the comparisons being made. Where was Guava 10 years ago when I needed it?

Guava also provides utilities for more advanced operations. An immutable collections library is provided to help keep your data thread-safe, known, and consistent. Guava also tries its hand at some functional features. Since Java is not a functional programming language, it is easy to get into a heap of trouble when trying to fit Java into that paradigm. The Guava Wiki gives a warning to the user on how Functions and Predicates may be easily misused. As you peruse the list of utilities on Guava’s Wiki, you’ll see that the following examples are just a taste of what Guava offers to the “lazy” (read: resourceful) programmer. I only included some of the frequently used items in the Basic Utilities section. Hopefully this helps show that Guava isn’t just another utilities library, but one that will actually boost your productivity.

Object Method Assistance

The Objects class helps the developer accurately and easily implement the equals(), hashCode(), toString(), and compareTo() methods. As you’ll witness, the descriptive nature of Guava is much easier to read than a bunch of if statements and variables. Even though we have confidence in our code, isn’t it nice to know the code used in Guava has been tested many more times than we could ever hope for?

Take a basic Employee entity:

public class Employee implements Comparable<Employee> {
    private long id;
    private String title;
    private String firstName;
    private String lastName;

    //getters and setters
}

Even for simple entity classes, it is a good idea to override the Object methods to properly represent the object instance.

public boolean equals(final Object obj) {
    if (obj == null || getClass() != obj.getClass()) {
        return false;
    }
    if (this == obj) {
        return true;
    }
    Employee otherEmployee = (Employee) obj;
    //I don’t want to read this...
    return this.getId() == otherEmployee.getId()
        && (this.getLastName() == null ? otherEmployee.getLastName() == null
            : this.getLastName().equals(otherEmployee.getLastName()))
        && (this.getFirstName() == null ? otherEmployee.getFirstName() == null
            : this.getFirstName().equals(otherEmployee.getFirstName()))
        && (this.getTitle() == null ? otherEmployee.getTitle() == null
            : this.getTitle().equals(otherEmployee.getTitle()));
}

That isn’t very much fun to write, read, or debug. The developer reading the main property comparison logic has to slow way down to understand what is going on. It isn’t very descriptive, and it is easy enough to get something wrong. Typos are harder to discover with all the syntax involved in the comparisons.

Let’s see how Guava can help us:

public boolean equals(final Object obj) {
    if (obj == null || getClass() != obj.getClass()) {
        return false;
    }
    if (this == obj) {
        return true;
    }
    Employee otherEmployee = (Employee) obj;
    return Objects.equal(this.getId(), otherEmployee.getId())
        && Objects.equal(this.getLastName(), otherEmployee.getLastName())
        && Objects.equal(this.getFirstName(), otherEmployee.getFirstName())
        && Objects.equal(this.getTitle(), otherEmployee.getTitle());
}

Isn’t that much more enjoyable to write and read? We “lazy” programmers like it. With Objects.equal(), it is very easy to comprehend the intent. It doesn’t take much time or effort to read. The ease of use should also reduce the chance of introducing a typo. Only the important pieces are left. It would be very easy to catch if a comparison was incorrectly written as Objects.equal(this.getLastName(), otherEmployee.getFirstName()).

Now on to the Object.toString() method implemented the traditional way:

public String toString() {
    return String.format(
        "Employee{id=%d, lastName=%s, firstName=%s, title=%s}",
        getId(), getLastName(), getFirstName(), getTitle());
    //creates => Employee{id=1, lastName=Smith, firstName=John, title=President}
}

Not too bad, but I don’t like working with String literals. No tool is going to let me know if I messed up the format. Object.toString() is a very handy helper when it comes to debugging and viewing the logs for information. So any help in writing these methods is just a boost to productivity.

An assist by Guava’s Objects.toStringHelper():

public String toString() {
    return Objects.toStringHelper(this).add("id", getId())
        .add("lastName", getLastName())
        .add("firstName", getFirstName()).add("title", getTitle())
        .toString();
    // Employee{id=1, lastName=Smith, firstName=John, title=President}
    // or, with omitNullValues() and a null title...
    // Employee{id=1, lastName=Smith, firstName=John}
}

A nice benefit to ToStringHelper is that I don’t have to worry about the property types. It all works whether I am adding a String, int, or any other type. If only the value is needed without a label, addValue() can be used. Another benefit of using the utilities provided by Guava is the addition of helper methods. ToStringHelper gives us the option of omitting the values that are null with the omitNullValues() method. Of course, you didn’t need me to give you a redundant sentence describing what the method provides.

Now it’s hashCode() time:

public int hashCode() {
    int hash = 1;
    int prime = 31;
    hash = hash * prime + (int) getId();
    hash = hash * prime
    + (getLastName() == null ? 0 : getLastName().hashCode());
    hash = hash * prime
    + (getFirstName() == null ? 0 : getFirstName().hashCode());
    hash = hash * prime
    + (getTitle() == null ? 0 : getTitle().hashCode());
    return hash;
}

Again, not too bad. If computing the hash code is always the same for every implementation, why keep writing it?

public int hashCode() {
    return Objects.hashCode(getId(), getLastName(), getFirstName(), getTitle());
}

Again, wasn’t that much more enjoyable to read and write? Guava allows the developer to focus on the parts that matter and not on the code that can be templated out. It is hard to get it wrong with Guava.

Comparing objects is similar to how Object.equals() and Object.hashCode() work. It is just as painful and verbose, so, not surprisingly, Guava can help us here too. Implementing Comparable.compareTo() is necessary to support ordering and has bigger consequences if not implemented correctly. Implementing the method so that properties such as transitivity are upheld adds to the complexity. So why not let Guava help us out?

The traditional compareTo():

public int compareTo(final Employee otherEmployee) {
    int compareResult = Long.compare(getId(), otherEmployee.getId());
    if (compareResult != 0) {
        return compareResult;
    }
    compareResult = getFirstName().compareTo(otherEmployee.getFirstName());
    if (compareResult != 0) {
        return compareResult;
    }
    compareResult = getLastName().compareTo(otherEmployee.getLastName());
    if (compareResult != 0) {
        return compareResult;
    }
    return getTitle().compareTo(otherEmployee.getTitle());
}

With Guava’s Help:

public int compareTo(final Employee otherEmployee) {
    return ComparisonChain.start()
        .compare(this.getId(), otherEmployee.getId())
        .compare(this.getFirstName(), otherEmployee.getFirstName())
        .compare(this.getLastName(), otherEmployee.getLastName())
        .compare(this.getTitle(), otherEmployee.getTitle())
        .result();
}

Just like the Objects utility methods, Guava makes implementing the compareTo() method a breeze. The only things the developer needs to remember are to add all the properties that need comparing and to place the properties with the highest chance of differing first in the chain. The comparison starts with start() and ends with result(); each compare() is called until a non-zero result is found, which is why ordering the properties by likelihood of difference matters. As with the other utility methods, all the fluff is removed to provide a concise, easy-to-read, and painless method. It is harder to make an accidental typo that the compiler won’t catch, and if you do accidentally swap inputs, like .compare(otherEmployee.getId(), this.getId()), it will be pretty visible within the chain.

Guava provides additional comparison utilities in the Ordering class. Because Ordering implements Comparator, it can be passed anywhere a Comparator is expected, such as Collections.sort(List<T>, Comparator<? super T>). Ordering.usingToString() can be used instead of natural ordering as the basis of comparison. Ordering also provides capabilities to manipulate collections and examine values. It is handy to have an isOrdered() method to check whether an Iterable is already ordered, and since it is a good idea not to change lists you don’t own, sortedCopy() and immutableSortedCopy() are provided to do as their names suggest: return a sorted copy of the list.
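A quick sketch of these Ordering utilities (the employees list is hypothetical, and the fragment assumes the Guava jar, with Ordering and com.google.common.base.Function imported):

```java
// Order employees by last name, sorting null last names first.
Ordering<Employee> byLastName = Ordering.<String>natural().nullsFirst()
        .onResultOf(new Function<Employee, String>() {
            public String apply(Employee employee) {
                return employee.getLastName();
            }
        });

// Only sort when necessary, and never mutate the original list.
if (!byLastName.isOrdered(employees)) {
    List<Employee> sorted = byLastName.immutableSortedCopy(employees);
}
```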

Avoiding Nulls With Optional

Nulls have caused so many headaches. No one wants to see the dreaded NullPointerException or add yet another null check to an if statement. I’ve been bitten by the null bug more times than I would like to admit. So, what can we do about it?

Since a null value is ambiguous and often misunderstood, it is frequently recommended not to introduce or use nulls in application code. One way to indicate the absence of a value is to use Guava’s Optional. Not only does Optional give you something to return instead of null to indicate an absent value, it also forces the developer to think about the case where a value doesn’t exist, whereas a null is easily forgotten. Guava uses this null-averse philosophy in many of its utilities. When you read the documentation, you’ll find that many methods throw an exception when a null is encountered. Most of the time, there is little use in trying to continue processing a null value. In other cases, knowing something doesn’t exist is valuable and may dictate a different processing course. Guava can assist with both scenarios. According to Guava’s Wiki, there are facilities to help ease the use of nulls and to help avoid them altogether. As we all know, unless we control all the code in the application, we must always be aware of the possibility of a value being null.

Guava provides the Optional<T> class to replace a nullable reference with a non-null value. It also provides a slightly different way of describing T: a value is either present or absent. As you can guess, a null reference is absent. Optional<T> helps deal with null in a more descriptive manner and brings it front and center in the developer’s focus, making it easier and more descriptive for the application code to alter its processing course if a null is encountered.

To wrap an object that may be null and have a default value returned:

Optional<Long> possibleId = Optional.fromNullable(employee.getId());
possibleId.or(-1L);  // returns -1L if the value is not present
possibleId.isPresent(); // returns true
possibleId.get(); // returns Long
Optional<String> possibleTitle = Optional.of(employee.getTitle());  //throws exception if null

The hard part of incorporating Optional<T> into a system is deciding where the boundaries of Optional<T>’s existence should be. If a query is performed to select an Employee by name, should a null be returned or an Optional<T>? Should all parameters be Optional<T>, or none? Whether you are writing new code or refactoring existing code, Optional<T> applies best to return values. It forces the calling code to think about the null scenario: the calling code has to actively unwrap the object instead of blindly assuming a value exists. I find it more descriptive (and reassuring) to ask if the returned object is present.

Optional<Employee> optionalEmployee = dao.findEmployeeByName("Jones");
if (optionalEmployee.isPresent()) {
    Employee employee = optionalEmployee.get();
}

Checking Parameters with Preconditions

What about parameters, you ask? It is recommended to use Guava’s Preconditions, like checkNotNull(T), for parameters. Preconditions are concise and descriptive, whereas other common frameworks are a little too brief. As stated before, notNull(T) is ambiguous and doesn’t return the object passed in. Importing the Precondition methods statically is recommended and adds to the readability. A Precondition throws an exception if its check fails; if a parameter is in a state the application code can’t handle, there isn’t a reason to continue processing.

It is also handy to use checkNotNull(T) in a constructor since it returns T if T is present. Being able to do this in one line helps the readability of the constructor. It isn’t fun to add an if statement for each parameter to check for null.

Single line check:

this.firstName = checkNotNull(firstName);

Each Precondition utility is overloaded to attach debugging information to the thrown exception. This is important so the logs contain better information for figuring out what bad data is being passed in.

Precondition checks:

checkArgument(id > -1, "Expected id > -1, but id is %s", id);
checkArgument(id > -1, someErrorMessageObject);

Simplifying Exception Propagation with Throwables

Right or wrong, dealing with exceptions is almost an afterthought for most developers. Either an overly broad catch is created to catch all possible exceptions, or one catch for just a single specific exception, or a combination of the two. A specific exception may be caught because the calling method’s API enforces it, and a broad exception type is added as a catch-all. Most developers think about exception handling just enough to make the compiler happy; a new exception is added to the catch list only once it is found to be thrown, which is usually when the code is already in production.

try {
    // do something
} catch (SpecificException se) {
    // I can handle this
} catch (Throwable t) {
    // handle everything else
}

Throwables.propagateIfInstanceOf() gives the developer an option to keep a default catch with a broad exception type while still propagating a few specific exceptions from that single catch block, reducing the number of lines of code needed to accomplish this. The registered exception class is compared with the exception instance inside propagateIfInstanceOf(), and the exception is rethrown only if it is of that type. Throwables.propagate() then wraps anything that remains in a RuntimeException and rethrows it, which satisfies the compiler.

public void foo() throws ApplicationException, FileNotFoundException {
    try {
        // do some work
    } catch (NullPointerException e) {
        // I can handle this
    } catch (Throwable t) {
        Throwables.propagateIfInstanceOf(t, ApplicationException.class);
        Throwables.propagateIfInstanceOf(t, FileNotFoundException.class);
        throw Throwables.propagate(t);
    }
}

Another handy set of utilities helps with the exception stack. By the time an exception bubbles up to your code, the chain can get pretty messy and deep. Throwables offers getRootCause(), getCausalChain(), and getStackTraceAsString(); their names describe what they do. The nice thing about getting a list from getCausalChain() is that you can use Guava’s Iterables utility class to filter for specific exceptions to enhance debugging and logging. There can also be value in dealing only with the root exception via getRootCause().

Guava: A Solution To My Problems

I can keep my card with the “lazy” (read: resourceful) programmers club with Google Guava. Guava keeps the focus on what is important in code development and takes away all the fluff. Implementing Object methods shouldn’t be complicated, painful, or hard to read. I can stay “lazy” because Guava does everything for me. I only have to make sure the correct properties are present and are being compared to other correct properties. My “lazy” eyes don’t want to read a lot of syntactic fluff. Inline comments are not needed due to Guava’s concise, but descriptive method names. Productivity goes up when methods such as equals() are made up of equal() statements. Time isn’t wasted on complicated if statements. An added benefit is that the Guava code has been very well tested.

These examples are just scratching the surface. Hopefully you can see the difference between what Guava gives you and hand-jamming Object methods, or even other utility frameworks. Give it a spin and see what you find. Here’s where to start.

— John Hoestje,


February 11, 2013 / Keyhole Software

Case Study – Implementing an HTML5/Javascript Enterprise Java Application

The Keyhole team has had recent engagements that involve applying HTML5 technology to create rich client web applications. In order to explore and validate application architecture design patterns and best practices in this area, and, as they say, to “eat our own dogfood,” we have gone through the process of rewriting our existing internal timesheet tracking system.

The old, legacy system was implemented as Java portlets deployed to a Liferay portal. We implemented the new application with an HTML5-based front end that accesses server-side Java APIs. The timesheet system has common functionality that a typical enterprise application development team might encounter, so this blog will walk through how we built and architected this application.

Why HTML5/Javascript

Before we jump into the building process of this application, let’s first clarify what we actually mean by HTML5. HTML5 is a W3C specification that is being widely adopted by most, if not all, browser manufacturers. The specification introduces new elements and capabilities: local storage, canvas, full CSS3 support, location APIs, new attributes, and audio/video services, among others. These features will be important to enterprise development. An immediate benefit is the numerous JavaScript/CSS frameworks that enable the creation of a responsive, rich user interface. Packaging all of these together, we simply refer to this as HTML5.

More than just the new HTML5 features, there are a couple of key reasons driving this architecture shift to JavaScript. First, browsers and JavaScript engines have been optimized for performance, so it’s feasible to deliver and process a plethora of JavaScript code. Bandwidth and connectivity are other issues. Most desktops can assume connectivity and ample bandwidth, but mobile devices can’t always assume connectivity, and having to render HTML on the server can make the application sluggish.

The biggest factor, though, is that many JavaScript and CSS frameworks now exist that allow a rich, device-responsive user interface to be created in an agile way. It’s feasible to use these frameworks to implement a single web user interface that works well on desktops, tablets, and mobile devices.

Case Study Application

This newly-created web application is how Keyhole Software employees keep track of time worked. It is accessible from a desktop, tablet, and any mobile device with a browser. Additionally, we have implemented Android and iOS native applications that interact with the same JSON endpoints that the browser-based UI uses. There are more details about that further down the article.

Here are some use cases (accompanied by screen shots) for application functionality:

LDAP-based Authentication, using Atlassian Crowd:

LDAP-based Authentication

Once authenticated, users can create and edit time sheet entries by week. Administration roles have access to reporting features and timesheet approval options. Reports are defined as Eclipse BIRT reports.

Create and Edit Timesheets

User identification, as well as the applications and roles they have the authority to access, are stored in an LDAP repository. Administration roles can maintain groups or users.

User Identification / Roles

How We Built It

The application’s server side portions are built using JEE technology. The application is deployed to an Apache Tomcat application server hosted on an Amazon EC2 instance. System data is stored in a MySQL relational database. The application’s user interface applies the following frameworks:

Front End

  • Bootstrap – an open source framework for the UI components and the look and feel of the application. It includes styling for typography, navigation, and UI elements. One of the main reasons we chose Bootstrap was its responsive layout system, with which the user interface automatically scales to various devices (desktop, tablet, mobile).
  • jQuery, Require.js, and Backbone.js – JavaScript frameworks that provide Document Object Model (DOM) manipulation, dependency management and modularization, and Model View Controller and HTML templating support. All of them run within the client browser.

Server Side

Application logic and data access to MySQL was implemented as server side Java endpoints. These endpoints were accessible through RESTful URLs over HTTP. Endpoints were created using Java technology and hosted in the JEE application server. Plain old Java objects (POJOs) modeled the application data entities, which were mapped to the relational data source using an Object Relational (O/R) mapper.
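As a minimal sketch of such a model, a timesheet entry entity might look like the following plain Java object. The class and field names here are assumptions for illustration, not the application's actual code:

```java
import java.util.Date;

// Hypothetical POJO modeling a timesheet entry; field names are assumed
// for illustration. An O/R mapper would map these fields to table columns.
public class Entry {
    private long id;
    private Date day;      // the day the hours were worked
    private double hours;  // hours worked that day
    private String notes;  // optional free-form notes

    public long getId() { return id; }
    public void setId(long id) { this.id = id; }
    public Date getDay() { return day; }
    public void setDay(Date day) { this.day = day; }
    public double getHours() { return hours; }
    public void setHours(double hours) { this.hours = hours; }
    public String getNotes() { return notes; }
    public void setNotes(String notes) { this.notes = notes; }

    public static void main(String[] args) {
        Entry e = new Entry();
        e.setDay(new Date());
        e.setHours(8.0);
        e.setNotes("standup, sprint work");
        System.out.println(e.getHours() + " hours: " + e.getNotes());
    }
}
```

A POJO like this carries no framework dependencies, which is what lets the services and endpoints shown later be unit tested without a container.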

The following server side frameworks were used:

  • khsSherpa – a POJO-based RESTful JSON endpoint framework. Provides a built-in authentication mechanism.
  • Spring IOC and Authentication – Spring’s dependency management was used to configure the application logic and data access layers. Spring Authentication was used to authenticate with LDAP. Additionally, the Spring LDAP template was used to access and update the LDAP repository.
  • JPA and Hibernate – a Java persistence architecture to map and interact with MySQL using Java JDBC drivers.

Development Environment

Development was performed using Spring STS and the Eclipse IDE with a Tomcat application server. A cloud-based development EC2 MySQL instance was used.

Application Architecture Shift

In order to generate an HTML user interface, traditional Java web application architectures typically build the MVC server side using either JSPs or a server side HTML template framework. The legacy timesheet application used portlet JSPs. A POJO-based application model is used to model the application data entities. These are mapped to a data source and persisted using the traditional DAO pattern. Services are designed to “service” the user interface, controllers handle navigation, and views render an HTML user interface. This is all performed server side by an application server. Here’s a picture:

Traditional Java Web Application Architecture

The architecture shift involves moving the server side MVC elements to the browser implemented in JavaScript, HTML, and CSS, using the frameworks previously mentioned. Here’s the new picture:

The Architecture Shift

Server Side Implementation

Server side components are implemented using Java JEE components and are deployed as a WAR file component to an application server. Application logic and data access to the timesheet system’s relational data store follows a layered application architecture.

Service/POJO/Data Access

Let’s start with the server side elements of the application architecture. Services are implemented as Spring framework service classes. They provide an API to create, retrieve, update, and delete (CRUD) POJO models. JPA is used as the persistence mechanism. Since persistence does not need to be “pluggable,” the JPA entity manager is referenced directly in the services via Spring. For the sake of correctness, the EntityManager reference is arguably a DAO; if we anticipated a data source change, we would have defined a contract/interfaced DAO for pluggability. We also used the Spring Data framework for services that required more complex SQL queries. Spring Data’s ability to dynamically implement repository code, some might call it magic, made development very agile. The service implementation for weekly timesheet entries is shown below:

public class EntityWeekService extends BaseServer {
	private EntityManager entityManager;

	private Collection<?> buildWeek(List<Object[]> results) {
		List<Map<String, Object>> list = new ArrayList<Map<String, Object>>();
		for (Object[] o : results) {
			Map<String, Object> map = new HashMap<String, Object>();
			map.put("WeekName", o[0]);
			map.put("Year", o[1]);
			map.put("Week", o[2]);
			map.put("Hours", o[3]);
			map.put("Status", o[4]);
			list.add(map);
		}
		return list;
	}

	public Collection<?> getWeek(Status status) {
		Query query = entityManager.createNativeQuery(
				"SELECT * " +
				"FROM Entry " +
				"WHERE Entry.user_id = :user and Entry.client_id = :client and WEEK(day) = :week");

		query.setParameter("client", status.getClient().getId());
		query.setParameter("user", status.getUser().getId());
		query.setParameter("week", status.getWeek());

		List<Object[]> results = query.getResultList();

		List<Entry> list = new ArrayList<Entry>();
		for (Object[] o : results) {
			Entry entry = new Entry();
			entry.setId(((BigInteger) o[0]).longValue());
			entry.setDay((java.util.Date) o[1]);
			entry.setHours((Double) o[2]);
			entry.setNotes((String) o[3]);
			list.add(entry);
		}
		return list;
	}

	public Collection<?> getMyWeek(User user, Client client) {
		Query query = entityManager.createNativeQuery(
				"SELECT " +
					"CONCAT(YEAR(day), '/', " +
					"WEEK(day)) AS week_name, " +
					"YEAR(day), WEEK(day), " +
					"SUM(hours), " +
					"status " +
				"FROM Entry LEFT JOIN Status on Entry.user_id = Status.user_id and Entry.client_id = Status.client_id and YEAR(day) = Status.year and WEEK(day) = Status.week " +
				"WHERE Entry.user_id = :user AND Entry.client_id = :client " +
				"GROUP BY week_name " +
				"ORDER BY YEAR(day) DESC, WEEK(day) DESC " +
				"LIMIT 8");
		query.setParameter("client", client.getId());
		query.setParameter("user", user.getId());
		List<Object[]> results = query.getResultList();

		return buildWeek(results);
	}

	public Entry update(Entry entry) {
		return entityManager.merge(entry);
	}
}

Here’s a service implementation responsible for retrieving and persisting Client data. This service references a Spring Data ClientRepository interface:

public class ClientService extends BaseServer {
	private ClientRepository repository;

	public Collection<Client> getMyClients() {
		return repository.findByActive(true, new Sort(Sort.Direction.ASC, "name"));
	}

	public Collection<Client> getAllClients() {
		return repository.findAll(new Sort(Sort.Direction.ASC, "name"));
	}

	public Client getById(long id) {
		return repository.findOne(id);
	}

	public Client save(Client client) {
		return repository.save(client);
	}
}

RESTful JSON/Endpoints

Service methods are accessed using a RESTful URL pattern and return JSON data payloads. This is accomplished using the open source framework khsSherpa. Endpoints are defined by creating endpoint classes that are annotated with khsSherpa framework annotations. Methods in the endpoint class are annotated with RESTful URL actions. The framework handles parameterization and serialization of objects and arguments automatically. A partial endpoint implementation that fronts the weekly timesheet service is shown below:

public class EntryWeekEndpoint {
	private SimpleDateFormat formatter = new SimpleDateFormat("yyyy-MM-dd");

	private EntityWeekService entityWeekService;

	private EntryService entryService;

	private ClientService clientService;

	private UserService userService;

	private StatusService statusService;

	@Action(mapping = "/service/my/week/client/{id}", method = MethodRequest.GET)
	public Collection<?> myClientWeeks(@Param("id") Long id) {
		return entityWeekService.getMyWeek(
				userService.findByUsername(SecurityContextHolder.getContext().getAuthentication().getName()),
				clientService.getById(id));
	}

	@Action(mapping = "/service/my/week/client/{id}/times/start/{start}/end/{end}")
	public Collection<Entry> getWeekTimes(@Param("id") Long id, @Param("start") String start,
			@Param("end") String end) throws ParseException {
		return entryService.getBetween(
				userService.findByUsername(SecurityContextHolder.getContext().getAuthentication().getName()),
				clientService.getById(id), formatter.parse(start), formatter.parse(end));
	}
}

Endpoints are accessed with the following URLs:

This URL returns employee timesheets for the current time period in JSON format:


This URL returns employee timesheets for a date range in JSON format:
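To give a concrete feel for the URL pattern, here is a sketch of how a client might construct the date-range URL. The base path and the yyyy-MM-dd date format follow the endpoint mapping above; the helper and its values are illustrative:

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;

public class EndpointUrlExample {
    // Builds the RESTful URL for the date-range endpoint. The path segments
    // mirror the @Action mapping; the client id and dates are placeholders.
    static String weekTimesUrl(long clientId, Date start, Date end) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd"); // matches the endpoint's formatter
        return "/service/my/week/client/" + clientId
                + "/times/start/" + fmt.format(start)
                + "/end/" + fmt.format(end);
    }

    public static void main(String[] args) {
        Calendar start = new GregorianCalendar(2013, Calendar.JANUARY, 7);
        Calendar end = new GregorianCalendar(2013, Calendar.JANUARY, 13);
        System.out.println(weekTimesUrl(42, start.getTime(), end.getTime()));
        // prints /service/my/week/client/42/times/start/2013-01-07/end/2013-01-13
    }
}
```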


LDAP Authentication

Employees are authenticated into the application against a Crowd LDAP user repository. This is accomplished using the Spring Authentication frameworks and an LDAP template. The khsSherpa framework is integrated with Spring Authentication, and therefore only LDAP configuration context files are required. An example Spring context file is shown below:

	<security:ldap-server id="contextSource"
		manager-dn="cn=ARootUser,cn=Root DNs,cn=config"
		manager-password="<password>" />

	<bean id="ldapTemplate" class="org.springframework.ldap.core.LdapTemplate">
		<constructor-arg ref="contextSource" />
	</bean>

	<bean id="authenticatedVoter" class="" />
	<bean id="roleVoter" class="ws.directweb.timesheet.auth.CustomRoleVoter">
		<property name="rolePrefix" value="DW_" />
	</bean>

If authenticated, a random token is returned and must be used by subsequent requests. The token is associated with a timeout and lifetime period.
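To illustrate, a Java client might attach the token and user ID to subsequent requests as request headers. This is only a sketch: the header names used here are assumptions for illustration, not khsSherpa's actual header names:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class AuthenticatedRequestExample {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8080/sherpa/service/my/week/client/1");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Hypothetical header names; a real client would use whatever headers
        // the server side token manager expects.
        conn.setRequestProperty("userid", "dpitt");
        conn.setRequestProperty("token", "example-token-123");
        // The connection is configured but not yet opened; conn.connect()
        // would send the request with these headers attached.
        System.out.println(conn.getRequestProperty("token"));
    }
}
```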

Authenticated URLs

The JSON endpoint framework provides a token-based authentication mechanism. Authenticated RESTful URLs must be accessed with a valid token and user ID, which are stored in the request header. The khsSherpa framework automatically authenticates the token and ID against a pluggable token manager. Non-authenticated public endpoints can also be defined; in the case of our timesheet application, only authenticated URLs are required. The endpoint framework allows endpoints to be secured across the board, using a property file, or at an endpoint/class level. Here’s a snippet of a non-authenticated endpoint:

@Endpoint(authenticated = false)
public class GroupEndpoint {
    private LdapDao dao;
    private GoogleService googleService;

    @Action(mapping = "/service/groups", method = MethodRequest.GET)
    public Collection<LdapGroup> getGroups() {
        return dao.getGroups();
    }
}

Unit Testing

To test the server side Service/DAO implementations, we used JUnit. And since the endpoints are POJO-based, they can also be tested with JUnit, without a running server.
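Because an endpoint is just a POJO, a test can instantiate it with a stubbed service and assert on the result directly. The sketch below uses hypothetical names and plain assertions rather than JUnit annotations to stay self-contained:

```java
import java.util.Arrays;
import java.util.Collection;

public class EndpointPojoTestSketch {
    // Hypothetical service interface the endpoint depends on.
    interface GroupService {
        Collection<String> getGroups();
    }

    // Hypothetical POJO endpoint; no server or container is required to use it.
    static class GroupEndpoint {
        private final GroupService service;
        GroupEndpoint(GroupService service) { this.service = service; }
        public Collection<String> getGroups() { return service.getGroups(); }
    }

    public static void main(String[] args) {
        // Stub the service with canned data, then exercise the endpoint directly.
        GroupEndpoint endpoint = new GroupEndpoint(() -> Arrays.asList("DW_ADMIN", "DW_USER"));
        Collection<String> groups = endpoint.getGroups();
        if (groups.size() != 2) throw new AssertionError("expected two groups");
        System.out.println("groups: " + groups);
    }
}
```

In a real JUnit test the stub would typically be a mock, but the principle is the same: no HTTP, no container, just object wiring.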

Continuous Build and Deploy

Git is the source repository we use for this internal development project, and we use GitHub to host our repositories. Git works well since we have a distributed workforce. We have installed a Hudson server on an Amazon EC2 instance, with a Hudson project configured to execute a Maven package goal. This goal compiles, builds, tests, and produces a WAR file, which is then deployed to a test application server.

Client/Browser Side User Interface: 100% JavaScript

Here’s where the big architecture shift takes place. Instead of defining dynamic HTML with a server side MVC, a client side MVC is used. The entire front end is constructed with JavaScript and common frameworks are used to help with modularity and dependency management. This is essential as this architecture needs to support a large application (not just a website) with some cool widgets. And by itself, JavaScript does not have the necessary structure to support modularity.

JavaScript elements are contained within the JEE WAR component and can be defined in web content folders.

Modularity/Dependency Management

JavaScript does not provide any kind of modularity or dependency management support. To fill this gap, the open source community developed the Require.js framework. Traditionally, JavaScript files are loaded with script tags using the src attribute. This becomes unwieldy when many JavaScript files and modules are involved, and it invites collisions in the JavaScript namespace and inefficiencies from loading a module multiple times. Since JavaScript has no built-in import mechanism for dependencies, Require.js allows modules to be defined that declare and load the modules they depend on. This is a necessary modularity mechanism for large applications.

MVC Pattern

The application user interface is constructed using the Backbone.js JavaScript MVC framework, which supports the separation of application navigation and logic from view implementation. The same design patterns and techniques are applied as in server side MVC with JSPs, JSF, or template mechanisms. The key benefit of doing it all client side in JavaScript, however, is a rich and responsive user interface. Our timesheet system’s user interface is comprised of many view.js files. Views in Backbone.js parlance are actually controllers: they obtain a collection of JSON objects, define event handlers for UI elements (such as buttons), and render an HTML template.

As an example, here’s a UI snippet of weekly time sheets for an employee:

UI Snippet

The screen shot above shows weekly timesheets for an employee. This user interface is built using a Backbone View module, a Collection module, and an HTML template. Collection modules retrieve and contain the JSON model objects for the view. Here’s the Collection module implementation responsible for holding timesheet entry models for the UI snippet:

define(['backbone', './model.week'], function(Backbone, Model) {
	return Backbone.Collection.extend({
		model: Model
	});
});

A timesheet entry model implementation is shown below:

define(['backbone'], function(Backbone) {
	return Backbone.Model.extend({
		initialize: function(attributes, options) {
			if(attributes && attributes.code && attributes.code == 'ERROR') {
				throw attributes.message;
			}
		}
	});
});

Here’s a snippet of the view controller module for the UI snippet. It’s only partially shown, but notice how the collection module object is created, and how timesheet entry model objects are fetched from a RESTful URL. You can also see the require(…) function being used to pull in dependent modules.

require(['./timesheet/view.timesheet.client.week.time', 'model/collection.entry', 'util'], function(View, Collection, util) {
    var _collection = new Collection();
    _model.set('enties', _collection);
    _collection.fetch({
        url: '/sherpa/service/my/week/client/' + _this.$el.closest('li').attr('data-client') + '/times/start/' + _firstDay.format('YYYY-MM-DD') + '/end/' + _lastDay.format('YYYY-MM-DD'),
        //async: false,
        success: function() {
            var _view = new View({
                model: _model
            });
            // ... render the view
        }
    });
});

The view controller renders a template with dynamic HTML for the view. Notice in the example below how dynamic object values are accessed using <% %> tags:

<td colspan="<%= data.span? data.span:'4'%>" style="padding-bottom:0">
<table class="table time-table" style="margin-bottom: 0">
	<tr>
		<% _.each(moment.weekdaysShort, function(day, index) { %>
			<th data-key="<%= index %>"><%= day %></th>
		<% }); %>
	</tr>
	<tr>
		<% _.each(moment.weekdaysShort, function(day, index) {
				var _id = '';
				var _time = '-';
				var _notes = undefined;
				var _week = data.Week? data.Week:data.week;
				var _year = data.Year? data.Year:data.year;

				//var _day ={year: _year, week: _week});
				var _day = moment(new Date()).hours(0).minutes(0).seconds(0);
				_day.add('w', _week - _day.format('w'));

				var entry = null;
				data.enties.each(function(e) {
					_d = moment(e.get('day'), 'MMM D, YYYY').hours(0).minutes(0).seconds(0);
					if(_day.diff(_d, 'days') === 0) {
						entry = e;
					}
				});
				if(entry) {
					_time = entry.get('hours');
					_id = entry.get('id');
					_notes = entry.get('notes');
				}
		%>
		<td>
			<div class="time uneditable-input" style="width:38px; background-color: #EEE; margin-left: auto; margin-right: auto; border: 1px solid #CCC; cursor: pointer; color: #555; margin-bottom: 0px;"><%= _time %></div>
			<input data-id="<%= _id %>" data-day="<%= _day.format('YYYY-MM-DD') %>" data-key="<%= index %>"
					class="time hide" type="text" value="<%= _time %>" style="cursor: pointer; margin-bottom: 3px;">
			<i class="icon-comment icon-<%= _notes? 'black':'white' %>" style="cursor: pointer;"></i>
			<span class="notes hide"><%= _notes? _notes:'' %></span>
		</td>
		<% }); %>
	</tr>
</table>
</td>

HTML5 Role-based Access

Access to certain features of the timesheet application is determined by the authenticated user’s role. Roles are identified by the LDAP groups that the user is a member of. When an HTML template is rendered, an HTML5 data-secure attribute is defined on secured elements. It references a JavaScript function that determines whether the user has access with the specified roles. The function calls a server side endpoint that returns the valid roles for the user.
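The check itself amounts to asking whether the user holds at least one of the roles the template requires. Here is a hedged sketch of that server side logic in Java; the helper name and signature are assumptions for illustration, not the application's actual code:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

public class RoleCheckSketch {
    // Returns true if the user holds at least one of the required roles.
    // In the real application the user's roles would come from LDAP groups.
    static boolean hasRole(Collection<String> userRoles, Collection<String> requiredRoles) {
        for (String required : requiredRoles) {
            if (userRoles.contains(required)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> userRoles = Arrays.asList("DW_TIMESHEET_ADMIN", "DW_USER");
        System.out.println(hasRole(userRoles, Arrays.asList("DW_ADMIN", "DW_TIMESHEET_ADMIN"))); // true
        System.out.println(hasRole(userRoles, Arrays.asList("DW_ADMIN"))); // false
    }
}
```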

Features and data for the user interface are contained within <div> tags, so this is where the data-secure attribute is applied. Only users with one of the supplied roles can see the elements within the <div>. The example HTML template below makes reporting capabilities visible to overall admins and to admins for the timesheet application.

<div class="page-header">
  Keyhole Software <small>Timesheet</small>
</div>
<div class="btn-toolbar">
  <a href="#timesheet" class="btn btn-info btn-timsheet-page">Timesheet</a>
  <a data-secure="hasRole(['DW_ADMIN','DW_TIMESHEET_ADMIN'])" href="#timesheet/reports" class="btn btn-info btn-reports-page">Reports</a>
  <div data-secure="hasRole(['DW_ADMIN','DW_TIMESHEET_ADMIN'])" class="btn-group">
    <button class="btn dropdown-toggle btn btn-warning btn-admin-page" data-toggle="dropdown">Administration <span class="caret"></span></button>
    <ul class="dropdown-menu">
      <li><a href="#timesheet/admin/submitted">Submitted List</a></li>
      <li><a href="#timesheet/admin/approved">Approved List</a></li>
    </ul>
  </div>
</div>


Timesheet reports are defined and formatted using the Eclipse BIRT report framework. BIRT reports were created using the report designer and deployed with the WAR. A user interface was created to accept report parameters, and an endpoint was defined that consumes the parameters and returns a PDF, produced by the BIRT reporting engine, which is embedded in the server side timesheet application WAR. Here’s an example of the report launch UI:

Screen Shot - Reports


Our goal in building this application was to validate that robust enterprise applications can be built successfully using HTML5/JavaScript and related frameworks.

Interestingly, the architecture shift back to the client is reminiscent of the client-server architecture days. However, the standardization of HTML5, improved browser compatibility, and optimized JavaScript performance make this shift both feasible and desirable. The benefits include a rich, responsive user experience; fewer server side attack vectors; the elimination of browser plugin technologies; and API-driven data access that decouples the user interface from application logic and data access. Additionally, HTML5/JavaScript has a very large knowledge base and broad adoption, so experienced developer resources are available.

— David Pitt,


February 4, 2013 / Keyhole Software

Introducing Business Intelligence Reporting to a Software System, a Jasper Reports How-to

Most software development teams spend all of their time and efforts gathering requirements, planning, testing, implementing and supporting large systems to be able to do the one thing they are all in business to do: make money!

Rightfully so, the core efforts of their business and development teams need to focus on making the software work for their business and their customers. What usually gets overlooked (and pushed to the back burner) is the value sitting in the data. The data collected and generated by software systems is invaluable to businesses. It can help them be more profitable. It can help them understand their customers better. Heck, it can even help them understand themselves better. But, how do you get to that data?

The idea of Business Intelligence brings all of this data to light. In theory, getting the right data in the hands of empowered people is when real positive change can occur. But where do you start? What software systems are available? What will work with your technology base and infrastructure? There are many choices in the market that are both free and paid.

The open source Jasper Reports framework has been around for years. It is a Java-based reporting engine that allows users to create visual reports based on any accessible data source. Being Java-based allows it to easily integrate into existing desktop or web-based Java systems.

I think another commonly-overlooked benefit of Jasper Reports is that because it is Java, businesses and organizations can use their existing development teams and current development processes to get reporting integrated quickly. The ramp-up time to learn the tool is minimized — a huge benefit.

The Reporting Platform

The folks at JasperSoft have a few offerings related to the reporting platform, but for this basic introduction, we are going to focus on only two of them.

The first piece is the Jasper Reports framework. It is the heart of the whole system. The framework can take a report, connect to a datasource (e.g. a relational database, an XML file, etc.), fill the report with data, and then export it to a given format (HTML, XLS, DOC and CSV, to name a few). This is a simplified description of what the framework can do.

The second piece is iReport. iReport is the desktop tool (available for both Mac and PC) used to create the reports themselves. It is built and designed around a WYSIWYG approach where elements are dragged and dropped into the body of the report. Users can move elements around in a pixel-by-pixel fashion to get the report to look exactly how they want it. It is the job of the Jasper Reports framework to figure out how to achieve that look and feel in PDF, HTML, Word, etc. All users have to do is design the report one time, in iReport, and the Jasper Reports framework does the rest.

Under the covers, the file(s) created by iReport are simply XML files (with a .jrxml file extension) meeting the Jasper Reports specification. You actually don’t need iReport to build a report, but without it you would have to know the XML schema and build the file by hand. iReport is a visual interface for building that XML, sparing users from ever having to know the specific details. But knowing them won’t ever hurt!
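For a sense of what iReport generates, here is a minimal, hand-trimmed sketch of a .jrxml file. The element names (jasperReport, queryString, field, detail, band, textField) follow the Jasper Reports schema, but the attributes are abbreviated for illustration; a real file generated by iReport carries namespace declarations and many more attributes:

```xml
<!-- Minimal sketch of a .jrxml report; attributes trimmed for illustration -->
<jasperReport name="myReport" pageWidth="595" pageHeight="842">
	<queryString><![CDATA[SELECT first_name, last_name FROM employee]]></queryString>
	<field name="first_name" class="java.lang.String"/>
	<field name="last_name" class="java.lang.String"/>
	<detail>
		<band height="20">
			<textField>
				<reportElement x="0" y="0" width="100" height="20"/>
				<textFieldExpression><![CDATA[$F{first_name}]]></textFieldExpression>
			</textField>
		</band>
	</detail>
</jasperReport>
```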


You can download iReport for free, since it is distributed under the AGPL license. Once downloaded and extracted (and a new blank report is created), you should see a view like this:


iReport can acquire data from many different sources. For this example, we’ll use a relational database and write an SQL query to get the data for our report. The first step is to configure iReport for this database connection.

  1. On the main toolbar, click on the Report Datasources button:
  2. In the Connections/Datasources window, click the “New” button on the right side.
  3. Select Database JDBC Connection as the connection type and click “Next.”
  4. You should now see a dialog like this:
  5. Fill out the fields as follows:
    • Name: Provide a name for this connection that makes sense to you.
    • JDBC Driver: Select the type of database/driver for your application from the dropdown list. If your driver type is missing, you can add it to the list by adding the driver’s jar file onto the iReport classpath and restarting.
    • JDBC URL: Fill out your server name and database name as necessary in the URL pattern.
    • Username: Enter the username to use for the connection.
    • Password: Enter the plain text password to use for the connection.
  6. Click the “Save Password” text box if you don’t want to have to retype your password every time you open iReport and connect to this database.
  7. Click the “Test” button to make sure your connection works properly.
  8. Click the “Save” button to finish.

You should now see your new connection displayed in the dropdown list to the right of the Report Datasources button. Whichever connection is selected in that list is the connection that iReport will use when you Preview your report. Changing databases is as simple as changing the selected item in that dropdown!

Creating and Editing Reports

There are three main areas of the editor to focus on:

  1. Report Inspector – This is a tree-like view of the structure of the report. You should see styles, parameters, variables, fields, scriptlets and then a list of “bands.”

    • Parameters: These are used to capture values from users and are accessible by the report to use in SQL queries, etc. One basic example of a parameter may be a date (or a date range) for which the user wants to see data.
    • Variables: Similar to writing cell-based formulas in Excel, Variables can calculate a value based on how they are set up. There are many calculation types, but the most common are Sum, Count, Average, etc. A few built-in Variables already exist, such as the current page number, the total number of pages, and the current row number.
    • Fields: If you are using an SQL statement to gather your data (the most common scenario), the Fields represent a list of columns coming back from the database. The Fields are what actually get displayed on your report at the end. Once you enter a query into the SQL window, iReport will generate Fields for you by connecting to the database and seeing what columns are returned by your query.
    • Bands: A band is a virtual container for Report Elements. The elements you drag and drop into a band are the things that will display data on your report. There are many bands available, but you can choose which bands are necessary for your report and add/delete them at will. Each band has its own purpose:
        • Title: Shows up once at the top of the first page of your report
        • Page Header/Footer: They show up at the top/bottom of each page of your report (the # of pages is usually determined by the number of rows in your resulting dataset).
        • Column Header/Footer: They show up before/after the data section of your report.
        • Detail: This is a special band that behaves differently than all other bands. The detail band is repeated for each and every row in your query’s result set. So if your query brings 100 rows back from your datasource, then the detail band will be printed 100 times in a row.
  2. Editor Area – This is where your report is displayed. You can see all the bands of your report and any elements you have dragged and dropped into them.
  3. Properties View – This view changes to show the properties of whatever element is selected in either the Report Inspector or in the Editor area. Here you can easily see the size (height and width), position (top and left distances from the edge of the band), font settings and more.

Now that you have iReport installed, configured and know the main areas to focus on, you can start building your report.

You should probably begin by entering your SQL statement into the report. To do this, click on the “Report Query” button on the toolbar:

In the “Report Query” window, type in your SQL statement under the “Report Query” tab.

Below your query, you should see a list of “Fields” that iReport generated based on the columns your query returns. When everything looks okay, click the “OK” button to save and close your query.

Look in the “Report Inspector” under the “Fields” node. Expanding that node should show you all the Fields you just saw in the Report Query window. Drag and drop one of these fields onto the “Detail” band of your report. You should now see something like this:

iReport just did two things for you:

  1. Created a Static Text element with the name of your field in it, and placed it into the Column Header band.
  2. Created a Text Field element for your field and placed it into the Detail band where you dropped it.

Make no mistake here, these are two separate and distinct elements. You can move them independently and edit or delete them independently.

Now, let’s move them to the top left corner. You can do this either by dragging them with your mouse, or by clicking on one and editing its properties (setting the Top and Left values to zero).

Now repeat this process for other fields that you wish to display in your report. Here is what my report looks like now:

I took the liberty of renaming the column headers (double click on one, type a new value, and hit “Enter” on your keyboard), bolding them (using the button on the toolbar), and positioning them so they are all next to each other. Things are starting to look better, but we have a lot of extra white space. Let’s set the height of our bands to match our elements so that everything gets tightened up.

We need to adjust both the column header and detail bands. Click each, one at a time, in the Report Inspector. With the band selected, go to the “Properties View” and set its height to 20. When done, right click on the Column Footer and delete it (we don’t need it for now). Also, delete the Page Header, Summary and Background bands. Set the height of each remaining band to 20 as well. You should have something like this now:

Our bands are nice and clean and we are displaying some fields across the width of the report. The only things left for our simple example are a Title and some Page Footer information. Drag and drop a Static Text element from the Palette into the “Title” band, position it to your liking, and enter the title of your report. Style the element by setting its font settings to your liking.

Next, go to the Palette and drag and drop the “Page X of Y” element to your page footer. Move the elements to whichever side of the band you like. Now you should see something like this:

Now, we are ready to run our report and see what it looks like. This part is easy, simply click the “Preview” button on the toolbar:

iReport will connect to your database, run your query, assemble your report and display it for you all in one step. Here is my report in preview mode:

If you notice, one of my columns had no data in it, so I see a list of “null” values. This isn’t very user friendly, so I’m going to ask iReport to help me. Toggle yourself back to Design mode by clicking the “Designer” button on the toolbar (two buttons to the left of the “Preview” button).

In the “Detail” band, click to select the element in the column with null values. In the “Properties View,” click the “Blank when null” checkbox. iReport will now leave that column empty instead of displaying the word “null” repeatedly. I have done this to two of my columns and now my report looks like this:

Things are looking much better, and for this example, we have what we need to proceed and get the integration going into our existing Java application.


Getting JasperReports into your existing Java application is easier than you think. Download and extract the JasperReports framework and add the required jars to your project's classpath.

Make a new folder in your codebase and copy the report’s .jasper file (created by iReport) into it.

Jasper Reports provides you some tools to load the report from a given location. Use the JRLoader object and select the method that suits you best. Here are a few common examples:

package com.keyholesoftware.example.jasperreports;

import java.io.File;
import java.io.InputStream;

import net.sf.jasperreports.engine.*;
import net.sf.jasperreports.engine.util.*;

public class JasperReportLoader {

	/**
	 * Load a report from a specific (absolute) file path
	 * @return the loaded report, or null if loading failed
	 */
	public JasperReport loadReportFromFile() {
		JasperReport result = null;

		try {
			File myReport = new File("/reports/myReport.jasper");
			result = (JasperReport) JRLoader.loadObject(myReport);
		} catch (JRException e) {
			e.printStackTrace();
		}

		return result;
	}

	/**
	 * Load a report from the classpath
	 * @return the loaded report, or null if loading failed
	 */
	public JasperReport loadReportFromInputStream() {
		JasperReport result = null;

		try {
			InputStream is = getClass().getResourceAsStream("myReport.jasper");
			result = (JasperReport) JRLoader.loadObject(is);
		} catch (JRException e) {
			e.printStackTrace();
		}

		return result;
	}
}

There are many more ways to load your report, but this gives you a quick and dirty way. In the end, you just need an instance of a JasperReport object.

Once you have acquired that report, the next step is to fill it with data and generate a JasperPrint object. This object is what we need to export the report to a specific format (PDF, XLS, HTML, etc).

package com.keyholesoftware.example.jasperreports;

import java.sql.*;
import java.util.*;
import java.util.Date;

import net.sf.jasperreports.engine.*;

public class JasperReportService {

	public JasperPrint fillReport(JasperReport report) {
		JasperPrint result = null;

		try {
			// acquire a database connection for my report to use
			Connection myDatabaseConnection = DataSourceService.getConnection();

			// optionally pass some parameters to my report
			Map<String, Object> parameters = new HashMap<String, Object>();
			parameters.put("myParameter1", 1);
			parameters.put("myParameter2", "some value here");
			parameters.put("myParameter3", new Date());

			// Fill the report and generate a JasperPrint object
			result = JasperFillManager.fillReport(report, parameters, myDatabaseConnection);
		} catch (JRException e) {
			e.printStackTrace();
		}

		return result;
	}
}

The example above shows how to combine a Map of parameters and a database connection to fill the report with data. Again, there are many ways to fill your report, but this is a basic, straightforward example. Look at the JasperFillManager to see the other options.

Now that we have a JasperPrint object created from our report, we can use it to export our report to a format of our liking. Here are a few common examples:

package com.keyholesoftware.example.jasperreports;

import java.io.ByteArrayOutputStream;
import java.io.OutputStream;

import net.sf.jasperreports.engine.*;
import net.sf.jasperreports.engine.export.*;

public class JasperReportExporter {

	public String exportToHTML(JasperPrint jasperPrint) {
		String result = null;

		try {
			JRHtmlExporter htmlExporter = new JRHtmlExporter();

			// setup the exporter
			htmlExporter.setParameter(JRExporterParameter.JASPER_PRINT, jasperPrint);
			OutputStream os = new ByteArrayOutputStream();
			htmlExporter.setParameter(JRExporterParameter.OUTPUT_STREAM, os);

			// do the work
			htmlExporter.exportReport();

			result = os.toString();
		} catch (JRException e) {
			e.printStackTrace();
		}

		return result;
	}

	public byte[] exportToPDF(JasperPrint jasperPrint) {
		byte[] result = null;

		try {
			result = JasperExportManager.exportReportToPdf(jasperPrint);
		} catch (JRException e) {
			e.printStackTrace();
		}

		return result;
	}

	public byte[] exportToXLS(JasperPrint jasperPrint) {
		ByteArrayOutputStream result = new ByteArrayOutputStream();

		try {
			JRXlsExporter xlsExporter = new JRXlsExporter();

			// setup the exporter
			xlsExporter.setParameter(JRExporterParameter.JASPER_PRINT, jasperPrint);
			xlsExporter.setParameter(JRExporterParameter.OUTPUT_STREAM, result);

			// set some optional parameters for this specific exporter
			xlsExporter.setParameter(JRXlsExporterParameter.IS_ONE_PAGE_PER_SHEET, Boolean.FALSE);
			xlsExporter.setParameter(JRXlsExporterParameter.IS_DETECT_CELL_TYPE, Boolean.TRUE);
			xlsExporter.setParameter(JRXlsExporterParameter.IS_WHITE_PAGE_BACKGROUND, Boolean.FALSE);
			xlsExporter.setParameter(JRXlsExporterParameter.IS_REMOVE_EMPTY_SPACE_BETWEEN_COLUMNS, Boolean.TRUE);
			xlsExporter.setParameter(JRXlsExporterParameter.IS_REMOVE_EMPTY_SPACE_BETWEEN_ROWS, Boolean.TRUE);

			// do the work
			xlsExporter.exportReport();
		} catch (JRException e) {
			e.printStackTrace();
		}

		return result.toByteArray();
	}
}

There are many options for each export type, but again, this gives you some good basic examples of the HTML, PDF and XLS formats. Now your code can take these results and do with them what you wish: email them, write them to disk, or take the exported HTML string and put it on a page in your Java-based web application.
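For instance, writing one of the exported results to disk needs nothing beyond the JDK. Here is a minimal sketch; the byte array in main is a stand-in for the result of exportToPDF above, and the file name is illustrative:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ReportWriter {

    // Write exported report bytes (e.g. the result of exportToPDF) to the given file.
    // Returns true on success, false if the write failed.
    public static boolean save(byte[] reportBytes, String fileName) {
        try {
            Files.write(Paths.get(fileName), reportBytes);
            return true;
        } catch (IOException e) {
            e.printStackTrace();
            return false;
        }
    }

    public static void main(String[] args) {
        // stand-in bytes; a real caller would pass exportToPDF's result
        byte[] fakePdfBytes = "%PDF-1.4 example".getBytes();
        System.out.println("saved: " + save(fakePdfBytes, "myReport.pdf"));
    }
}
```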

JasperReports is a very mature reporting framework with literally thousands of options and features. I have just touched the surface to show you how to get started with integrating a reporting framework into your system. Follow up blog posts will cover more advanced report writing topics, as well as advanced report execution procedures, performance tweaks and memory saving tips. Stay tuned!
Also, the examples featured in this article can be downloaded here:

— Adi Rosenblum,

January 28, 2013 / Keyhole Software

Mapping Shortest Routes Using a Graph Database

We often model interconnected data by cramming it in and out of table structures. Why don’t we simply model interconnected data as … interconnected data?

I recently wrote that there are several kinds of NoSQL database stores: key-value, column family, document-oriented, and graph database.  This article targets Neo4j, a Java-based graph DBMS. The open-ended problem domain of graph databases includes access control lists (ACLs), social networks, recommendation systems, and geo-spatial modeling.


In 2007, a colleague and I used Java with Oracle 9i to implement Dijkstra's Algorithm. Our "MapQuest for Trains" application would route a rail train over various rights-of-way while minimizing cost. The cost was a function of distance, fuel surcharge, and obstacles. The task to route a train from Los Angeles to Chicago had a grotesquely long response time. Nobody wanted their applications deployed on our nodes because we spiked the servers!

Where did we go wrong? The USA rail map is a living instance of graph theory. We modeled an extensive network of nodes (stations) and connecting links (tracks) as node tables joined through link tables. Navigation through this model involved n-level joins, and an RDBMS hits the wall as it proceeds through 4th- and 5th-level joins. See the Neo4j in Action book in the references for benchmarks of the graph DBMS Neo4j against the RDBMS MySQL. We tried sidestepping this deep-join problem by keeping the network in memory for the life of the application instance, but the North American rail system is quite large. The application thrashed.

Graph Data Store

Fast forward to summer, 2012.  I stumbled onto Neo4j, a graph database. The literature described suitability of Neo4j for modeling social networks. It could rapidly answer queries such as, “How many of Judy’s friends, their friends, and their friends’ friends have read Little Women?” I could use Neo4j to create my own Facebook or Twitter. Think of a social network as an instance of graph theory. I wasn’t interested, but how about a do-over at routing trains through the graph that comprises the USA rail network?

Raw Data

I wanted to publish my work with no strings attached. I found an open 1996 National Transportation Atlas Database ZIP file of rail data. I found a flat node file and a link file within. Each record of the node file had an integer ID plus a pair of fields for latitude and longitude. Each record of the link file had a distance field along with the node IDs for the node pair it connected. Perfect!
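A sketch of what reading those records might look like follows. Note that the comma delimiter and the exact field order used here are assumptions for illustration, not the actual NTAD layout:

```java
// Minimal parsers for the two flat files described above: a node file of
// (ID, latitude, longitude) records and a link file of (from, to, distance)
// records. Delimiter and field order are illustrative assumptions.
public class RailRecordParser {

    public static class NodeRecord {
        public final long id;
        public final double latitude;
        public final double longitude;

        NodeRecord(long id, double latitude, double longitude) {
            this.id = id;
            this.latitude = latitude;
            this.longitude = longitude;
        }
    }

    public static class LinkRecord {
        public final long fromNodeId;
        public final long toNodeId;
        public final double distance;

        LinkRecord(long fromNodeId, long toNodeId, double distance) {
            this.fromNodeId = fromNodeId;
            this.toNodeId = toNodeId;
            this.distance = distance;
        }
    }

    // node record: integer ID plus a latitude / longitude pair
    public static NodeRecord parseNode(String line) {
        String[] f = line.split(",");
        return new NodeRecord(Long.parseLong(f[0].trim()),
                Double.parseDouble(f[1].trim()),
                Double.parseDouble(f[2].trim()));
    }

    // link record: the two node IDs it connects plus a distance
    public static LinkRecord parseLink(String line) {
        String[] f = line.split(",");
        return new LinkRecord(Long.parseLong(f[0].trim()),
                Long.parseLong(f[1].trim()),
                Double.parseDouble(f[2].trim()));
    }
}
```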

Bulk Load

This data could directly map to a Neo4j database. See Figure 1. I conceived Bizarro World, where railroad stations would have graph nodes with integer “names.” Hey, it’s only a demo! Each station node would have a latitude property and a longitude property. A station-to-station connecting track becomes a graph link having a single node-to-node distance property. Any number of links could fan out from a given station node.

I wrote a command-line Eclipse Maven project to bulk-insert a file of nodes, index them, and insert the corresponding file of links, all into an embedded Neo4j database. Run time was 20 minutes on my MacBook Air (I discovered it has a fan!). Neo4j is ACID transactional, but a non-transactional bulk insert API sped up the insertion by an order of magnitude.

Figure 1: Neo4j models graph theory structure directly

Google Earth

My goal was to implement a Dijkstra algorithm, or the faster A* algorithm to produce a shortest distance node-by-node path between any two of my integer-named rail stations.  See Figure 2 for the notion of a shortest or least-cost path between N1 and N7.

Figure 2: Shortest path from N1 to N7

Each node would contain the node’s latitude / longitude properties. I could walk the path to emit Google KML – meaning Keyhole Markup Language – not related to Keyhole Software. Google Earth will render a KML layer atop a map. The visual effect would be striking.

At this point I received a gift from Neo4j. The heart of my code is able to pass the buck to Neo4j to do the heavy lifting. See the gist, or a subset below:

Transaction tx = graphDb.beginTx();
try {
    Expander relExpander = Traversal.expanderForTypes(
                DomainConstants.RelTypes.DOMAIN_LINK, Direction.BOTH);

    relExpander.add(DomainConstants.RelTypes.DOMAIN_LINK, Direction.BOTH);

    PathFinder<WeightedPath> shortestPath = GraphAlgoFactory.aStar(relExpander,
                costEval, estimateEval);

    emitCoordinate(ps, shortestPath, nodeA, nodeB);

    tx.success();
} finally {
    tx.finish();
}
The built-in A* algorithm picks a shortest route through the maze of nodes – ‘stations’ – to return an iterator, shortestPath, over those latitude / longitude points that form a shortest path from beginning to end.  I ran that iterator to create a KML stream of points suitable for display on Google Maps or Google Earth. Each node is connected to another by a link that has a distance property.

Additionally, I had to supply a “cost” callback function. The A* algorithm multiplies each examined link cost property – miles – by this functional value. My surrealistic Bizarro World implementation simply returns the value one. In a realistic railroad world, this function could factor in variables such as fuel cost, track usage charges, train type restrictions, or even regional embargos against travel, caused by events such as a Hurricane Sandy obliterating tracks. A cost factor for the latter route would be a huge number.
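The idea behind that callback can be illustrated with a small standalone sketch. Note that the CostFactor interface below is hypothetical, invented purely for illustration; it is not Neo4j's actual CostEvaluator signature:

```java
// Illustrative stand-in for the cost callback described above. Effective link
// cost is its mileage multiplied by a situational factor: 1.0 in Bizarro
// World, or a prohibitively large number for an embargoed link.
public class LinkCost {

    interface CostFactor {
        double factorFor(long linkId);
    }

    // Bizarro World: every link's factor is simply 1.0
    static final CostFactor UNIT = linkId -> 1.0;

    // A link under embargo (e.g. tracks obliterated by a hurricane)
    // gets a huge factor so the router avoids it
    static CostFactor embargo(long embargoedLinkId) {
        return linkId -> linkId == embargoedLinkId ? 1_000_000.0 : 1.0;
    }

    static double effectiveCost(double miles, long linkId, CostFactor f) {
        return miles * f.factorFor(linkId);
    }

    public static void main(String[] args) {
        System.out.println(effectiveCost(10.0, 7, UNIT));
        System.out.println(effectiveCost(10.0, 7, embargo(7)));
    }
}
```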

The algorithm produces “a” shortest route. There could be ties in total cost or distance.  Presumably, a smarter cost function would impose enough variance to eliminate those rare ties.

The user passes "to" and "from" station integers on the command line.  The application quickly produces the route as a KML stream sent to Google Earth in an OS-dependent manner. The visual effect is a smooth zoom into a display, such as that of Figure 3. The zoom actually takes longer than its triggering route calculation. One can zoom down farther, usually finding satellite shots of trains in the cities.

Figure 3: Command-based route displayed on Google Earth

You can clone the project from GitHub here. Import it as a Maven project into Eclipse 3.7 Juno. Compiler level is Java 1.6. Use the project's README as your guide.

Mobile Web

That was easy, but I prefer web apps, especially mobile web apps. How about a KML layer atop Google Maps?  I created another Maven project in Eclipse composed of the following function points:

  1. An initialization servlet that copies the Neo4j rail graph database from the class path to a writeable directory – so that Neo4j can write a lock value.
  2. A controller servlet that expects “from” and “to” integer station parameters, producing a KML file of the shortest from-to route.
  3. A single JQuery Mobile page that accepts “from” and “to” slider values, to pass the controller’s KML result stream to the Google Map API. It adds the KML as a layer, onto an embedded Google Map displayed on the single page. A reset button clears the route, unless you want to stack a series of KML layers.

See Figure 4 for an iPhone 4 screen capture. Try it yourself: I deployed its WAR file to a $0.00/hr Heroku dyno that idles after a period of disuse. You may suffer a one-minute wakeup response time at first. It's snappy thereafter, until the site falls into disuse for an hour or more.

Figure 4: Mobile version of app embeds Google Maps

You can exercise step 2, the controller servlet, on its own. Try integer pairs from the range 1 … 133752. It will download a KML file; double-click the file to invoke a Google Earth route display. I tested this part alone before I wrote the single HTML page.

You may clone the project from GitHub here. Carefully read the README file. Your clone will not draw a route without blessing it with your own Google API key and the URL of your own public deployment.


Bizarro World data is outdated and incomplete, making it bizarre. US stations are not really named by integers. Real railroad data includes track ownership, junctions between rail companies' tracks, and industry-standard station identifiers.  A production cost function would have access to extra link properties besides distance. I wanted this demo to be freely public, and Association of American Railroads data from its Railinc subsidiary isn't free. Neo4j could accommodate that real data.

I’ve barely touched Neo4j. It did the job with little learning on my part. Of course Google brings impressive mapping to the table, but I needed a dynamic shortest-path KML layer as well.  Neo4j has a remote REST API, but I used an embedded database. It has several graph traversal languages available, but I needed only the built-in Lucene index, which I used to locate the "from" and "to" nodes to feed to the A* algorithm. The main use of the API was to load and index the raw data. Even so, I used a non-transactional bulk insert to accelerate the insertion. At run time I wrapped my meager API calls in a requisite transaction.

I’ve only exercised the CR of CRUD. I’ve certainly not tried to scale this demo.  This sounds negative, but I’m excited. I have a graph data store available in my architectural toolbox. It would be interesting to create an ACL authorization framework with Neo4j, or even relent, and dip a toe into modeling social networks. And did you ever wonder how Amazon seems to know exactly what to sell to you?

— Lou Mauget,


January 21, 2013 / Keyhole Software

Quick-to-Implement Custom Features in Google Maps

Have you ever used Google Maps to find the closest branch of your bank? Search for restaurants nearby? Get driving or walking directions? Find the best public transit route? Unless you’re reading this because you got it in a spam fax, it’s likely that you have. And, like you, most of your site’s users are at least familiar with the Google Maps interface and have used it for both productive and non-productive means.

It’s intuitive, easy-to-use, and very powerful from a developer’s standpoint as well. Google has gone to great lengths to ensure that developers can leverage the power of their Maps application and use as much or as little of it as they would like. They have provided very detailed API documentation, examples, and a community for developers to help coders use Google Maps.

I have been working with Google Maps and its APIs for several years, and had slightly taken that fact for granted. Recently, discussions with other developers showed me that many haven't: they've either never had the opportunity or the need to work with Google Maps, or have only done so in a very basic way. So, I'd like to take this opportunity to highlight some of the easier-to-implement features in Google Maps.

This isn’t a tutorial, as there are plenty of fine tutorials out there, but rather a small showcase of Maps and how to easily create a rich and interactive custom map for your web application.

Custom Map Markers

One of the best ways to make your map stand out from all of the others is to include some personal touches, such as using your company logo for a map marker.

The hardest part is creating an image to use. The image needs to be small enough to not obscure too much map, but not so small that it is hard to see. For reference, Google’s standard marker size is 24×37 pixels. Creating an image “shadow” will give it the 3D look that Google has for its markers. Adding these to your map is quick and easy once the images are complete. Each marker is created independently for the map, allowing every marker to have its own unique look, if desired.

Now you’ll want some coordinates to tell the map where to place your triumphant masterpiece. For that, we turn to Google’s Geocoding Services after capturing some information about the location. Google takes anything from a country name to a fully qualified street address. The results that you get back will be an array of location data. There is a wealth of data returned by this service, but for most applications, you’ll only care about the location coordinates which can be added as a property of a marker, telling it where it belongs on the map.

You can also use Geocoding to reverse the search and lookup an address by providing the coordinates.

Note: I am taking the first result in this example. Depending on your need, you may want to loop through all the results.

Google provides a way to include a static map image too. The call is very simple and returns an image you can either save or display directly in an <img> tag.

Personalized User Location

Perhaps you want to locate your user and display their location. This can be useful for a site or application targeted primarily at mobile users. A desktop or laptop user will not typically have GPS available, so the location services will use WiFi location or their IP Address instead.

Personally, I have found that only Google’s Chrome browser will consistently give this information. I’ve found that Firefox has been hit or miss and Internet Explorer versions 8 and lower don’t support the location services at all.

To enable this, two small functions and an "if" statement are needed. The condition in the "if" statement tests whether the browser is able to provide a location. When adding code for user location, be prepared to handle the case where the user's location is unavailable.

Turn-by-Turn Directions

To tie this all together, let's use Google's Directions Service to provide turn-by-turn directions for your users. We may just be replicating the more basic features of getting directions directly from Google Maps, but by doing so, we keep users on your site longer, which is typically the goal of having them there in the first place.

The service is pretty easy to use. I've personally found that it is best to provide Google with coordinates if you can, rather than forcing the Directions service to geocode addresses first. If you want to display the turn-by-turn text, just add an element on your page to display it:

Distance Matrix

The Google Distance Matrix is another great tool you may want to take advantage of. It allows you to look at different start or end points to determine which location is closest by distance or travel time. Sometimes, you just want to know straight-line distance between two known points. And for that we don’t need Google, just the latitude and longitude of the two places being compared.

This formula has come in extremely handy when I have had a site with many known locations (such as company warehouses) that we have geocoded, with the coordinates stored in a database. In these situations, straight-line distance and travel time rarely diverge, meaning no two locations are so close that one is 5 miles nearer but takes 5 minutes longer to reach. Because of this, we can save on web calls to Google's services and simply figure out which known end point is closest to the given starting point.

Check it out:
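A standard haversine implementation of that straight-line (great-circle) distance, sketched here in Java for illustration:

```java
public class GreatCircle {

    private static final double EARTH_RADIUS_KM = 6371.0;

    // Haversine great-circle distance between two lat/lng points, in kilometers
    public static double distanceKm(double lat1, double lng1, double lat2, double lng2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLng = Math.toRadians(lng2 - lng1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLng / 2) * Math.sin(dLng / 2);
        return EARTH_RADIUS_KM * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
    }

    public static void main(String[] args) {
        // one degree of longitude along the equator is roughly 111 km
        System.out.println(distanceKm(0, 0, 0, 1));
    }
}
```

With coordinates already stored for each known location, this runs entirely on your own server with no call to a Google service.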

Final Notes

Google allows a wide array of customization and control over how their map-related services are used, how information is returned, and the look and feel. I think we've covered what most users will want for basic functionality, but keep in mind that there is a great deal more:

You can designate “zones” on a map, highlight them with shading and borders, limit your maps to not pan out of a certain area, only search within a defined area, and display directions for walking or public transit. There are event listeners you can create, allowing the map or its objects to change based on user interaction, location, zoom level, or whatever you can imagine.

And, if you really wanted to take the time to replace all map tiles outside of your country, state, or county with images of David Hasselhoff attempting to eat a hamburger, you can. (Please don’t though.)

— by Scott Peters,

January 14, 2013 / Keyhole Software

Modularization in TypeScript

UPDATE: Check out the new GitHub project that accompanies this post: TypeScript Modularization Demo

In my last post, I introduced TypeScript, Microsoft’s new offering which layers static typing on top of JavaScript. Today I’d like to go further and discuss modularization and code organization.

TypeScript has the ability to take advantage of a pair of JavaScript modularization standards – CommonJS and Asynchronous Module Definition (AMD). These capabilities allow for projects to be organized in a manner similar to what a “mature,” traditional server-side OO language provides. This is particularly useful for the most likely candidates for TypeScript – large Web Applications.

Although TypeScript’s modularization is powerful, it requires some background knowledge of CommonJS and AMD, and has its pitfalls. I hope in this post to be able to provide some of that background knowledge and provide some helpful pointers in organizing your TypeScript project in a modular fashion. Particular attention is paid to the topic of importing plain JavaScript files in your TypeScript code.

On the State of Modularization in JavaScript

Organizing scripts in JavaScript-based web apps has historically been a thorny issue. With no native, built-in system for modularizing code into packages, applications tended to quickly devolve into Script Tag Hell – dependencies being resolved by remembering to put this script tag before that one, but not before the other! Add to this the difficulty of preventing pollution of the global namespace and you have strong incentives to structure your application in as few files as possible.

Over time, these factors tended to act as a gravitational force, pulling your code closer and closer together, until the entire application collapsed under its own weight into a single black hole of a JavaScript file, before which all but the most senior developers cowered in fear.

Okay, so that’s a bit of a dramatization. But the fact is that organizing JavaScript projects by hand was extremely difficult, and often was simply left undone. Fortunately, a pair of specifications was created to address just this problem!

CommonJS and AMD

CommonJS and AMD provide mechanisms for loading JavaScript files as separate modules, giving you the power of organization needed in both server- and client-side JavaScript applications.

The primary differences between CommonJS and AMD are as follows:

  • CommonJS is synchronous in nature; AMD loads files asynchronously
  • CommonJS is primarily used on the server in conjunction with Node.js; AMD is most useful in the browser and its most popular implementation is RequireJS.

If you’re looking for more info, a good overview of the two can be found here (note that you must use the left/right arrow keys to navigate between slides). So, let’s get started with a basic example of what the syntax for each of these looks like.


CommonJS uses three “magic” keywords – require, module, and exports. Here’s what a simple example of a CommonJS Module file might look like:

function sayHello(name) {
   alert("Hello, " + name);
}

exports.sayHello = sayHello;

Here we use the magical exports variable to hang our sayHello function off of. To use this module in another file, we use Require.


var greeter = require("greeter");
greeter.sayHello("World");

Simple enough, right? Only methods or variables which are attached to exports are available outside the file. At this time, note that both CommonJS and AMD modules follow the Singleton pattern – when you include a module the first time, the code in that file runs. The second time you include it, you get a reference to the object which was produced the first time.

The last keyword CommonJS uses is the module keyword, which provides some additional capabilities (read about them here), one of which is a module.exports property. This property is initially the same as the exports property and can be used to hang off public methods and variables, but it also allows you to manually define exactly what object gets returned when the current module is required. For instance:


function sayHello(name) {
    alert("Hello, " + name);
}

exports.sayHello = sayHello;
module.exports = "Hello World";


var greeter = require("greeter");
alert(greeter); //alerts "Hello World"

Here we see that although we set exports.sayHello to a function, we overrode the entire returned module by setting module.exports to the string “Hello World.”


Let’s now turn to AMD. AMD’s special keywords are define and require (it works differently than the require in CommonJS so keep that in mind). define is used to define new modules, and require is used to load them. Keeping in mind that the “A” stands for “Asynchronous,” we are not surprised that AMD’s syntax involves callback functions.


define([/*dependencies go here*/], function (/*and getting mapped to variables here*/) {
    return {
        sayHello: function(name) {
            alert("Hello, " + name);
        }
    };
});
And to load this module, we do so either in another define call, or dynamically with Require.


//define an "app" module
define(["greeter"], function (greeter) {
    var app = {
        initialize: function() {
    return app;


function initialize() {
    //dynamically load "greeter" without defining another module
    require(["greeter"], function (greeter) {
        greeter.sayHello("World");
    });
}

So, a little bit more verbose here, but the advantage is that we have the power to load modules asynchronously at runtime! Although Require provides build tools which let you compile your code into a single (or multiple) file, you are also free to load modules dynamically, on an as-need basis at runtime. Both of these options are very useful when working on large projects where performance is a concern.

I should mention now that one feature of AMD/RequireJS is the ability to load code which was written with CommonJS in mind.  Code like this expects an exports variable, which Require can provide if we write our code like this:


//define an "app" module
define(["greeter", "exports"], function (greeter, exports) {
    exports.initialize = function() {

Here all that we do is specify a dependency of “exports,” and then the resulting exports variable can be used just like you would in CommonJS. The reason I mention this alternative syntax is because this is the way that TypeScript actually compiles when compiling to AMD.

Modularization in TypeScript

Essentially, TypeScript just provides syntactic sugar which sits on top of JavaScript but looks and feels a lot like standard OO fare.  This extends to modularization as well. TypeScript allows you to define classes (which really just resolve to prototypes) and modules (namespacing), which can be exported and used in other files. TypeScript then compiles these modules into either CommonJS or AMD format. By default it uses CommonJS, but adding the compiler option --module amd allows you to compile AMD-compliant modules. Please note that in order for browser-based AMD TypeScript code to actually run in a browser, you must manually include RequireJS or another AMD implementation on the page.

I will spend most of the remainder of the article focusing on AMD compilation in the browser, since that is the most common and important use case for most developers, and also happens to be the trickiest.

In Visual Studio 2012, you can set up your project to use AMD by editing your project file's "BeforeBuild" step to include the --module amd parameter, as described here. Note that if you are using AMD (i.e. in the browser) you must have loaded an AMD loader such as RequireJS. Otherwise, your project may compile, but it will fail to work in the browser.


Before we get into the details of modularization, let's talk briefly about classes in TypeScript. This might be a good time to brush up on your JavaScript prototypes-vs-classes and prototypal-vs-classical inheritance knowledge. If you've had a cursory look at TypeScript, you've seen that a class in TypeScript works out to be a constructor function/prototype in JavaScript.


class Pony {
	bite() {
	}
}

compiles to


var Pony = (function () {
    function Pony() { }
    Pony.prototype.bite = function () {
    };
    return Pony;
})();

Which is basically how TypeScript classes work – syntactic sugar that ultimately resolves to a plain old constructor function in JavaScript. The benefit of this, of course, is that it is considerably easier for a developer steeped in OO to understand this way of doing things than trying to wrap one’s mind around prototypes and constructor functions right off the bat.


To use our Pony class in another TypeScript file, we simply use the export keyword:


export class Pony {
	bite() {
	}
}

Which with CommonJS compilation yields this:

pony.js (CommonJS compilation)

var Pony = (function () {
    function Pony() { }
    Pony.prototype.bite = function () {
    };
    return Pony;
})();
exports.Pony = Pony;

And with AMD compilation looks like this:

pony.js (AMD compilation)

define(["require", "exports"], function(require, exports) {
    var Pony = (function () {
        function Pony() { }
        Pony.prototype.bite = function () {
        return Pony;
    exports.Pony = Pony;

To use our Pony class, we first must import that file, like so:


import Pony = module("pony");
var pony = new Pony.Pony();

Implicit Modules, or when TypeScript is unwieldy

Wait, that seems a little bit different than we might have expected… Shouldn’t we be able to just say var pony = new Pony()? As it turns out, no. Let’s take a closer look at the compiled output above. Notice that, as mentioned previously, TypeScript uses exports.* to export functions and variables, rather than just returning the object or function desired. Because of this, our Pony constructor function, rather than itself being the exported object, is actually just a method off of the exported module! I like to call this surprising behavior Implicit Modules (my term, good luck googling it) – when exporting a class in a file, the file essentially becomes its own module which contains the class that you wanted to export.

In my opinion, what would have been more convenient would have been for TypeScript’s compilation to yield a module.exports-style assignment, which would have enabled us to import the Pony class directly, rather than just as a variable off of some sort of implicit Pony module or namespace.

While this is largely just a bit of a syntactical inconvenience, it can also make it difficult-to-impossible to port over JavaScript code which relies on module.exports, as someone who registered an issue with Microsoft complained. It remains to be seen whether a future modification to the language specification, or some compiler flag of some sort will make it possible to compile this way. As we will see later, when importing JavaScript libraries, this will cause a little bit of difficulty.

One thing that can be done is to use this deficiency as a feature. Rather than authoring a single class per file, it becomes perhaps more desirable to place a set of related classes in a single file. For example:


export class Animal {
	name: string;
	constructor(name:string) {
		this.name = name;
	}
}

export class Pony extends Animal {
	bite() {
		alert("I, " + this.name + " have bitten you!");
	}
}

var pony = new Pony("Bob");


import Animals = module("animals");
var pony = new Animals.Pony("Bob");

Inheritance, or why TypeScript is great

In this example, we see both an arguable weakness (implicit modules) and a great strength of TypeScript – classical inheritance! Being able to elegantly define classes which inherit from each other is invaluable in large-scale application development. I consider this one of the greatest strengths of the language. Of course, ultimately it just resolves to JavaScript, but achieving this capability in pure JavaScript is really ugly and nigh-inscrutable. I prefer scrutable. But back to modularization.

When dealing with TypeScript-only modules, the Import/Export system works more or less like a charm. When you want to start including JavaScript libraries, it gets a bit trickier.

Importing JavaScript

In TypeScript, there are two ways to “include” another source file – reference comments and import declarations. Let’s have a look at each.

Reference comments

Reference comments add a dependency on the source file specified. They are only used for compilation purposes, and can be used to provide IntelliSense for JavaScript files. They do NOT affect the compiled JS output. Here is an example:

/// <reference path="pony.ts"/>
var pony : Pony;  //compiler now recognizes Pony class.

This tells the compiler that pony.ts will be available at runtime. It does not actually import the code. This can be used if you are not using AMD or CommonJS and just have files included on the page via script tags.

Import declarations

If we want to load a file via AMD or CommonJS, we need to use an import declaration.

import myModule = module("pony");

This tells the compiler to load pony.ts via AMD or CommonJS. It does affect the output of the compiler.

Importing JavaScript

Let’s face it. Most of the libraries we will want to use are not written in TypeScript – they’re all in vanilla JavaScript. To use them in TypeScript, we’re going to have to do a little bit of porting. Ambient Declarations and Declaration Source Files will be our tools.

Ambient Declarations (declare var)

Ambient Declarations are used to define variables which will be available in JavaScript at runtime, but may not have originated as TypeScript files. This is done with the declare keyword.

As a simple example of how this could be used, let’s say your program is running in the browser, and you want to use the document variable. TypeScript doesn’t know that this variable exists, and if we start throwing document around, the compiler will throw a nice error. So we have to tell it, like this:

declare var document;

Simple enough, right? Since no typing information is associated with document, TypeScript will infer the any type, and won’t make any assumptions about the contents of document. But what if we want to have some typing information associated with a library we are porting in? Read on.

Declaration source files (*.d.ts)

Declaration source files are files with a special extension of *.d.ts. Inside these files, the declare keyword is implicit on all declarations. The purpose of these files is to provide some typing information for JavaScript libraries. For a simple example, let’s say we have an amazing AMD JavaScript utility library, util.js which we just have to have in our TypeScript project.

define([], function() {
    return {
        sayHello: function(name) {
            alert( "Hello, " + name );
        }
    };
});

If we wanted to write a TypeScript declaration file for it, we would write something like this:

export function sayHello(name:string): void;

Declaration source files stand in for the actual .js files in TypeScript-land. They do not compile to .js files, unlike their plain *.ts peers. One way you can think of it is *.d.ts files act as surrogates for their .js implementations, since plain .js files aren’t allowed in TypeScript-land. They simply describe their JavaScript implementations, and act as their representative. What this means is that now you can import JavaScript! Here’s how we would use our util library:

import util = module("util");

This import statement here uses AMD or CommonJS to load the util.js file, the same as if util.js had been the compiled output of a util.ts file. Our util.d.ts provides the compiler/IDE with the IntelliSense to know that a sayHello method exists. The big takeaway here: If you want to include a JavaScript file, you need to write a *.d.ts file.

Note here that we used an Import Declaration to include our .js file. If we had merely wanted to tell the TypeScript compiler that util would be available at runtime (meaning we already loaded it somewhere else, via script tag or RequireJS), we could have used a Reference Comment in conjunction with an Ambient Declaration. To do that, we would first need to change our declaration source file:

interface Util {
  sayHello(name:string): void;
}

This defines a Util interface which we can then use for compilation purposes.

///<reference path="util.d.ts"/>
declare var util: Util;
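To see how the interface carries its weight at compile time, here is a self-contained sketch where a stub object stands in for the real util.js implementation (the stub is hypothetical; at runtime the actual library would provide it):

```typescript
// The interface from util.d.ts:
interface Util {
    sayHello(name: string): void;
}

// Hypothetical stand-in for the real util.js implementation:
const calls: string[] = [];
const util: Util = {
    sayHello: (name: string) => { calls.push("Hello, " + name); }
};

util.sayHello("World");   // type-checks against the Util interface
// util.sayGoodbye("World") would be a compile error - not on the interface
```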

It turns out that lots of people in the open source community have been cranking out TypeScript interface definitions for some of the most popular JavaScript libraries, including jQuery, Backbone, Underscore, etc. You can find dozens of these on GitHub in the DefinitelyTyped project. The interface definition for jQuery can be found here.

How to be lazy

What if I don’t want to take the time to re-write my .js library in TypeScript, or carefully craft an exhaustive *.d.ts file? It’s not hard to get around. Back to our util.js example. Let’s say that we had another method, sayGoodbye, which we didn’t want to take the time to define in our util.d.ts file, because of our supreme laziness. Simple. Just define a single export function in util.d.ts (one line isn’t going to hurt you!). Then, when you want to use methods that TypeScript doesn’t know exist, just use an Ambient Declaration, like so:

import util = module("util");
declare var unchained:any;
unchained = util;

The magic here is the Ambient declaration of the unchained variable, which is of type any. The any type tells TypeScript not to worry about typing – this variable could be anything. Trust us.
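A self-contained sketch of the escape hatch (the util object here is a stand-in for the real imported module; sayGoodbye is the method our hypothetical util.d.ts never declared):

```typescript
// Simulated util module: sayGoodbye exists at runtime but not in util.d.ts.
const util = {
    sayHello: (name: string) => "Hello, " + name,
    sayGoodbye: (name: string) => "Goodbye, " + name
};

// Funneling through `any` drops all compile-time member checking:
const unchained: any = util;
const farewell = unchained.sayGoodbye("World");   // compiles without a declaration
```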

Dynamically Importing Existing JavaScript Libraries – The Problem(s)

The trickiest part of all this is getting TypeScript to use AMD or CommonJS to import a JavaScript library or module, rather than just making it compile using a reference comment. There are two tricky components to this.

First, if you are using an interface definition like those found online, you can NOT use that file as your *.d.ts in an import declaration. The interface declaration is only used to provide compiler/IntelliSense information to the TypeScript compiler – interfaces are different from classes. In order for you to import your external library like we did in our simple util.js example from earlier, you need a different sort of *.d.ts file – one which uses the export keyword.

Second, recall from earlier how TypeScript has what I like to call Implicit Modules when importing files? You can’t directly import a class – you import the file it is in and then get your class definition off the resulting module object. Well, this causes us some grief when it comes to importing JavaScript modules which don’t follow the exports.* pattern in their AMD implementation, since most of the time when you import a “class” (constructor function) in AMD, you expect the imported object to be the constructor function itself.

Static Solution – Exporting to global namespace

The first, and simplest, way to use a JS library in TypeScript is to simply make a Reference Comment to the *.d.ts interface definition, make an Ambient Declaration for the variable, and then do the actual loading in your RequireJS configuration, exporting the library into the global namespace. This method is acceptable if you are talking about common core libraries such as jQuery or Backbone, but since it relies on the global namespace it is NOT a recommended solution generally. Here’s what this looks like, using jQuery as an example.

In our index.html, we start our application up by loading Require and pointing it at appConfig.js, a plain JavaScript file which sets up RequireJS and starts our application.


<script type="text/javascript" src="libs/require.js" data-main="appConfig.js"></script>


require.config({
    paths: {
        'jquery': 'libs/jquery-1.8.3.min'
    },
    shim: {
        'jquery': {
            exports: '$'
        }
    }
});

require( ['app', 'jquery'], function(App, $) {
    new App.App().start();
});


///<reference path="jquery.d.ts"/>
declare var $:JQueryStatic; //JQueryStatic is defined in jquery.d.ts

export class App {
    start() {
        $( "#content" ).html( "<h1>Hello World</h1>" );
    }
}

The key here is that jQuery is loaded in a non-AMD, globally scoped fashion. In the shim-jquery-exports-$ section in the RequireJS configuration, you can see that the exports keyword tells Require that when jQuery is loaded, the result is exported into the global variable $. Then in our TypeScript file, we simply add a Reference Comment for our *.d.ts interface definition and then make an Ambient Declaration saying that the variable $ will be available at runtime and is of the type JQueryStatic.

This is a great method for application-wide libraries like jQuery, but as I mentioned before, it is NOT advisable to use this willy-nilly due to its reliance on the global namespace as well as its load-everything-up-front approach, which may not be desirable in some larger applications. Also note that anytime you want to include a new JS library, you must change your application-level configuration, and this cannot be done (not easily at least) in TypeScript.

Dynamic Solution

So how do we use TypeScript to dynamically import JavaScript libraries? To get around the first problem mentioned above, what I like to do is to define two separate *.d.ts files: one containing the interface definition you probably pulled off the web, and another which exports a single variable of the type defined in the interface file. Let’s use jQuery as our example again. The jquery.d.ts definition defines a JQueryStatic interface. Let’s rename our interface definition file to jquery-int.d.ts, and create a new jquery.d.ts that looks like this:


///<reference path="jquery-int.d.ts"/>
export var $:JQueryStatic;

This will allow TypeScript to compile if we import jQuery like below.


import JQuery = module( "libs/jquery" );
var $:JQueryStatic = JQuery.$;

export class App {
    start() {
        $( "#content" ).html( "<h1>Hello World</h1>" );
    }
}

Now we are able to compile in TypeScript. However, let’s say that we have a fairly standard AMD-compliant loader file for jQuery, which might look something like this:


define( ["libs/jquery-1.8.3.min"], function() {
    return $;
});

Using an AMD “loader” file like this is a common way of modularizing non-AMD JavaScript libraries. The problem here, though, is that although our jquery.js loader returns $ when imported, in TypeScript our import statement expects an object that has a property of $. This is the second problem I mentioned earlier. My workaround for this is to change my AMD loader file to use exports.* just like TypeScript does.

define(["exports", "libs/jquery-1.8.3.min"], function (exports) {
    exports.$ = $;
});

Now when we import our jquery.d.ts in TypeScript, we will have the results we expected: a nice exported module with a $ property that happens to conform to our JQueryStatic definition. After weeks of scouring the web, this is the best method that I have come up with for dynamically importing JavaScript libraries in TypeScript. Let’s review the steps:

Dynamic Strategy Summary

  1. Snag an interface definition (*.d.ts file) off the web, or create one yourself. Remember that you can always be lazy and fall back on the any type.
  2. Rename this interface definition file to example-int.d.ts.
  3. Create a new example.d.ts file which exports a single variable of the type defined in your interface definition.
  4. Create a “loader”-style file example.js and use exports.* to export the desired library.
  5. Where desired, simply import the example module and find your desired library as a property off of the imported module.

And that’s it! It is a bit involved and definitely more work than I would have liked. I am hoping that the maintainers of the TypeScript language someday soon add the ability to use module.exports like I mentioned earlier, but until then this sort of workaround seems to be the order of the day.
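The whole strategy can be simulated in one self-contained file; here is a sketch with a stubbed-out library standing in for jQuery (the stub and its names are hypothetical, purely for illustration):

```typescript
// Stand-in for the real library (libs/jquery-1.8.3.min.js would define the global $):
const $stub = (selector: string) => ({
    html: (markup: string) => selector + ":" + markup
});

// Step 4's loader file (example.js) assigns onto exports.* instead of returning:
const jqueryModule: any = {};
(function (exports: any) {
    exports.$ = $stub;
})(jqueryModule);

// Step 5: the consumer pulls the library off the imported module object:
const $ = jqueryModule.$;
const res = $("#content").html("<h1>Hello World</h1>");
```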


To summarize our findings, let’s turn now to an elegantly crafted outline:

  1. TypeScript provides handy-dandy static typing and IntelliSense on top of core JavaScript
  2. Modularization in core JavaScript is crappy to non-existent
  3. AMD and CommonJS are great and allow you to cleanly organize code
  4. TypeScript can do AMD or CommonJS
    1. --module flag lets you switch between the two
    2. TypeScript inheritance is great (tangential but true)
    3. CommonJS module.exports is not allowed.
    4. TypeScript-to-TypeScript is clean and awesome (with the exception of the module.exports limitation above)
    5. JavaScript-to-TypeScript
      1. Relies on Reference Comments, Import Declarations, Ambient Declarations, and Declaration Source Files
      2. Is fairly simple when using global namespace
      3. Is more involved when loading dynamically
        1. Problems
          1. interface definitions are not the same as exports.
          2. Cannot directly import because of Implicit Modules
        2. Solution: see “Strategy Summary” above

I spent a lot of time trying to figure out how to dynamically import JavaScript libraries, so I hope you find my strategy useful. Other similar sorts of strategies I looked at can be found here and here. The first one is a bit odd and I did not like that it requires (no pun intended) you to use the ugly AMD/Require syntax in TypeScript, and the second forces you to modify the interface definitions you find online or write your own class-based definitions.

I like my strategy because, although there is some overhead in writing AMD loader-files which use exports.*, you can leverage the host of online interface definitions while maintaining TypeScript’s clean and elegant syntax. Please recall though that this technique is not necessary if the library in question is loaded in the global namespace!

If you’ve found an even better way of importing JavaScript, I’d love to hear about it.

Brett Jones, (Also, please consider following me on Twitter – @brettjonesdev)

UPDATE: I have created a simple example of the two main techniques (static versus dynamic imports) I describe here. Check it out on GitHub: TypeScript Modularization Demo

January 7, 2013 / Keyhole Software

Introduction to TypeScript Language and Tooling

TypeScript, Microsoft’s new open source JavaScript derivative, brings static typing along with a number of conventional OOP features to the Wild West of JavaScript. Much like CoffeeScript, a syntactical cousin of Ruby and Python, TypeScript compiles to plain old JavaScript. However, unlike CoffeeScript, TypeScript is in fact a superset of the JavaScript language. What this means is that you can actually write vanilla JavaScript in TypeScript (which is cool).

Language Basics

Static Typing – Hello World

Here’s a simple example of TypeScript’s static typing:

function greet(name:string, times:number, el:HTMLElement) {
    var message: string = "";
    for (var i = 0; i < times; i++) {
        message += "Hello, " + name;
    }
    el.innerHTML = message;
}
greet("Bob", 3, document.getElementById('content'));

If we try to pass our greet function parameters of the incorrect type, TypeScript won’t compile:

greet("Bob", "not a number", document.getElementById('content'));

Object Orientation

In my honest opinion, the greatest strength of TypeScript is its introduction of classic OO constructs like classes and inheritance. Granted, these sorts of things can be done in JavaScript’s prototypical system (see David Pitt’s blog post on the subject), but it tends to be more verbose, a bit confusing and far from elegant. In fact, it is so unwieldy that some popular libraries such as MooTools and Backbone provide their own OO abstractions to shield the common developer from the gritty realities of prototypical inheritance.
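For contrast, wiring up inheritance by hand in plain JavaScript looks something like this sketch (typed loosely so it compiles as TypeScript; the names are illustrative):

```typescript
// Constructor functions and a manually wired prototype chain:
function Animal(this: any, name: string) { this.name = name; }
Animal.prototype.sayHello = function () {
    return "Hello, my name is " + this.name;
};

function Pony(this: any, name: string) {
    Animal.call(this, name);                        // "super" call, by hand
}
Pony.prototype = Object.create(Animal.prototype);   // chain the prototypes
Pony.prototype.constructor = Pony;                  // repair the constructor reference

const pony: any = new (Pony as any)("Bob");
const greeting = pony.sayHello();                   // inherited from Animal
```

Every step here is boilerplate a developer must remember; forgetting `Object.create` or the constructor repair produces subtle bugs, which is exactly the ceremony TypeScript's `class`/`extends` hides.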

TypeScript provides the kind of OO goodies that developers over the past twenty years have come to expect. This can be very useful in helping ease developers coming from a conventional OO background into the wild and wonderful world of JavaScript and prototypical inheritance. Look at the following example of a simple class:

class Animal {
    name: string;
    constructor(name: string) {
        this.name = name;
    }
    sayHello() {
        alert("Hello, my name is " + this.name);
    }
}

This compiles to the following JavaScript:

var Animal = (function () {
    function Animal(name) {
        this.name = name;
    }
    Animal.prototype.sayHello = function () {
        alert("Hello, my name is " + this.name);
    };
    return Animal;
})();

Here we can see that although TypeScript appears to follow a more traditional OOP paradigm, under the covers all it really does is add some syntactic sugar to JavaScript’s basic prototypical inheritance model.


Let’s look at a simple example of one of the most useful of all OO paradigms (and my personal favorite) – Inheritance.

class Animal {
    name: string;
    constructor(name: string) {
        this.name = name;
    }
    sayHello() {
        alert("Hello, my name is " + this.name);
    }
}

class Pony extends Animal {
    sayHello() {
        super.sayHello();
        alert("and I am a pony!");
    }
}

var pony: Pony = new Pony("George");
pony.sayHello();

Pretty cool! I won’t show you what this compiles to here (it’s ugly), but if you’re curious, copy this into Microsoft’s online TypeScript Playground to get a better feel for how TypeScript compilation works.
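For the curious, the emitted code hinges on a small __extends helper; here is a simplified sketch of the idea (not the compiler's exact output, and the names are illustrative):

```typescript
// Simplified sketch of the __extends helper the TypeScript compiler emits:
function extendsSketch(child: any, parent: any): void {
    for (const key in parent) {
        if (Object.prototype.hasOwnProperty.call(parent, key)) {
            child[key] = parent[key];           // copy static members
        }
    }
    function Ctor(this: any) { this.constructor = child; }
    Ctor.prototype = parent.prototype;
    child.prototype = new (Ctor as any)();      // wire up the prototype chain
}

// Minimal demonstration with bare constructor functions:
function Base(this: any) {}
Base.prototype.greet = function () { return "hi"; };
function Sub(this: any) {}
extendsSketch(Sub, Base);

const s: any = new (Sub as any)();
const said = s.greet();   // found via the prototype chain
```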

Plain JavaScript in TypeScript

Although TypeScript allows you to use static typing, at any point in development you are free to fall back to writing vanilla JavaScript. For instance, our original greet method example from earlier could have been written in plain JS:

function greet(name, times, el) {
    var message = "";
    for (var i = 0; i < times; i++) {
        message += "Hello, " + name;
    }
    el.innerHTML = message;
}

Modularization and multi-file

TypeScript provides built-in support for CommonJS and AMD modules. It is quite simple to import and export TypeScript files, just like you would in a server-side language. Importing JavaScript libraries is a bit trickier, but can still be done. I will cover more on this in a later blog post dedicated to the topic, so stay tuned!


TypeScript was created by Microsoft, so as you’d expect, the first major IDE to have support for it is Visual Studio 2012. They have a nice plugin that integrates the TypeScript compiler into the IDE. Note that this download includes the TypeScript compiler, which can be run from the command line, as well as the VS plugin (VS 2012 must already be installed when the plugin is installed in order for the IDE to include TypeScript support).

As far as support goes with other IDEs, the current landscape is a bit sparse. WebStorm (my personal favorite web IDE, created by the people who brought you IntelliJ) has support coming in version 6, currently only available in the Early Access Program. There does not seem to be a solid Eclipse plugin yet. There appear to be some offerings available for Sublime Text, emacs and vim.

At the moment, the clear frontrunner – as you might expect – is Visual Studio. Let’s look at what it has to offer.

Visual Studio 2012

You can create a new TypeScript project by selecting “HTML Application with TypeScript.”

This creates a basic project that includes a default.html that loads app.js, the compiled result of app.ts, the main TypeScript file:

Running in Browser

Building our solution uses the TypeScript compiler to generate an app.js output file. To run, we can select what browser we want to use. Like any level-headed developer, I choose Chrome.

Selecting this option opens up a new tab in Chrome running our app:


We have two options to debug in Chrome developer tools: debug the compiled JavaScript output, or use source maps to debug our TypeScript files. The first option is simple enough – we just open our developer tools, open up app.js and start setting breakpoints.

Source maps are a bit different. If you’re not familiar with the concept, you can read about it here. Basically, source maps map the compiled, possibly minified code that the browser sees onto the actual source code you wrote, with all its syntactic niceties. This, of course, makes debugging much easier. In TypeScript, adding the -sourcemap switch on compilation generates sourcemap information in the compiled output. Visual Studio adds this option by default, so we don’t have to worry about it here.

To use source maps in Chrome, you must enable the source maps option:

This gives us the option to load app.ts and view our TypeScript code:

These debugging capabilities make real TypeScript development much more practical.


One of the biggest benefits that comes with using TypeScript is the ability to use powerful IntelliSense. With plain JavaScript’s dynamic typing, it is quite difficult for IDEs to offer safe, clean and reliable IntelliSense capabilities (WebStorm does it the best, in my opinion). With TypeScript, you can rely on such useful operations as Renaming a method or class. This is something very hit-or-miss in pure JS, and will be a great boon when developing large-scale applications.

Takeaway/First Impressions

What to make of TypeScript? The reaction to TypeScript’s release has ranged from dismissive to enthusiastic. Naturally, opponents of static typing dismiss it as an unholy pollution of JavaScript’s type-agnostic purity, whereas fans of compiled languages have great praise for it. Those who are critical of Object Orientation generally will not be impressed, while adherents will be excited to finally have OO available to them in the browser.

My take: it depends on you and your use case.

If you are working on a simple, lightweight web application with no more than a handful of developers, the overhead of compilation and learning new language features will probably not be worth the cost in flexibility and speed of development. You won’t like it.

If, on the other hand, you are working on a large-scale Enterprise application with multiple teams and dozens of developers, TypeScript could very well be the tool that makes a JavaScript-based application feasible. There is a certain degree of safety and stability to be found in static typing, and OO certainly helps prevent repetitive coding tasks. If your developers are all JavaScript gurus with years of experience on the client, you probably don’t need TypeScript. If you have a gaggle of Java/C# guys, TypeScript could be huge for you.

So should you invest in TypeScript? Again, it depends on your team and your project.

I’ll add a final caveat – TypeScript is very new right now. It could be a big risk for an organization to take a leap with TypeScript this early in the game. It certainly helps that a giant like Microsoft is behind it, but it remains to be seen if TypeScript will be a long-lived platform. I would certainly advise caution at this point.

In my next post, I will be examining TypeScript’s modularization techniques and how existing JavaScript libraries can be leveraged, so stay tuned!

Brett Jones,

I would also be honored if you would follow me on Twitter – @brettjonesdev.

December 26, 2012 / Keyhole Software

JSF Components – A Quick Example Using ICEFaces

This is a continuation of my previous entry on component-based Java frameworks. In this post, I would like to give a couple of examples of the kinds of components that can be used to quickly bring up a user interface.

I would like to first reiterate that what you choose is going to depend on your needs for a particular project. JSF and component libraries are perfect for certain situations, but may not be a good choice at all for others. If you want to have a lot of control over the JavaScript, then this is not the solution. However, in some cases, component libraries can help you get your user interface up and running both quickly and effectively.

JavaScript can be very fun to work with, especially now in the time of HTML5. I am in no way encouraging anyone to stay away (go play with the new frameworks for JavaScript and you will be saying how much fun it can be too. Really!). Nor am I pushing IceFaces as the best component library; I’m using it as an example because I was recently working on a project where it was successfully used.


If you want to use ICEFaces, then you will need to go to the website and get the appropriate download. Open the PDF guide and step through the Eclipse setup (hint: if you’ve downloaded the files, be sure to select local content when doing the Eclipse install, not the archive). You will need to pick a JSF implementation to use. I recommend using STS for easier setup. You can select from several JSF implementations. Or you can download your desired implementation manually and set up the library as described in the document.

For this demo, I am using Apache MyFaces. Also note that there are lots of ways to set up a JSF project – I’m using this one for simplicity.

Component Libraries

Just a quick reminder: JSF implementations offer up the basic HTML components while component libraries add the extra stuff to make UI development quick and easy.

For this example, I am using ICEFaces ACE Components. ICEFaces offers two sets of components: ICE components and ACE components. The ICEsoft website states that ICE components are primarily geared towards legacy browser support, specialized applications and application migration. The ICEsoft website describes ACE Components (Advanced components) as “ideally suited to ICEfaces projects that are able to leverage the power of modern browsers to provide a richer, more dynamic and responsive user-interface.”

Here we will use the following ACE Components: ace:dataTable, ace:tabSet, ace:menuBar, ace:menuItem and ace:tabPane.


This example application is far from complete. I am pulling partly from the freely available ICEFaces demo application as a base and creating parts of an admin console for a silly bingo game application. I’m viewing this as the first step to an administrative page where a user can both create a game and edit existing games.

The facelets template (template.xhtml) is set up and imported into the index page. The template defines the header, body and divs. The index then defines the ui items using ui:define. ui:define is a JSF facelets templating tag that defines named content to be inserted into a template. The name attribute value must match that of a ui:insert tag in the target template for the named content to be included.

I have created two underlying backing classes. MenuBarBean is where the tableData list is populated. In this example, I am using static content inserted into the dataTable upon initialization for simplicity.

A Basic Menu Bar

In template.xhtml:

<div class="topMenu">
	<h:form id="form">
		<ace:menuBar id="#{menuBarBean.menuBarId}" autoSubmenuDisplay="true">
			<ui:insert name="menuItems" />
		</ace:menuBar>
	</h:form>
</div>

In index.xhtml:

<ui:define name="menuItems">
	<ace:menuItem value="Home" url="#" icon="ui-icon ui-icon-home"
		styleClass="menuBarItem" />
	<ace:submenu label="New Game" styleClass="menuBarItem">
		<ace:menuItem value="Create From Scratch" />
		<ace:menuItem value="Use Game Template" />
	</ace:submenu>
	<ace:submenu label="Predefined Games" url="#" styleClass="menuBarItem">
		<ace:menuItem value="Crazy Holiday Bingo" />
	</ace:submenu>
</ui:define>

I’ve used the submenu option. The urls are currently undefined, however you can see how this would be linked up.

Alright, so let’s see what this looks like. You can see the three new menu items here and the submenu items created for the New Game selection.

Next we have the dataTable. Here are the contents of the dataTable.xhtml file, which is then used in the index.xhtml. The ui:composition tag is a templating tag that wraps content to be included in another Facelet.

In dataTable.xhtml:

<ui:composition xmlns:ui="http://java.sun.com/jsf/facelets"
	xmlns:h="http://java.sun.com/jsf/html"
	xmlns:ace="http://www.icefaces.org/icefaces/components">
	<ace:dataTable rows="10" value="#{menuBarBean.tableData}" var="game"
		paginator="true" paginatorPosition="bottom">
		<ace:column id="id" headerText="Id">
			<h:outputText id="idCell" value="#{game.id}" />
		</ace:column>
		<ace:column id="gameName" headerText="Game Name">
			<h:outputText id="gameNameCell" value="#{game.gameName}" />
		</ace:column>
		<ace:column id="creator" headerText="Creator">
			<h:outputText id="creatorCell" value="#{game.creator}" />
		</ace:column>
	</ace:dataTable>
</ui:composition>

In template.xhtml:

<div class="contentBody">
	<h:form id="form2">
		<ace:tabSet>
			<ui:insert name="tabItems" />
		</ace:tabSet>
	</h:form>
</div>

In index.xhtml:

<ui:define name="tabItems">
	<ace:tabPane label="Favorites">
		<ui:include src="WEB-INF/templates/dataTable.xhtml" />
	</ace:tabPane>
	<ace:tabPane label="Still in Progress">
		<ui:include src="WEB-INF/templates/dataTable.xhtml" />
	</ace:tabPane>
</ui:define>

Here you can see the two new tabs that were created and the populated dataTable:

As you can see, we were able to quickly get important parts of our user interface up and running with some commonly-needed components.

For further information check out the following websites:

— Adrienne Gessler,
