
Mathew's Thoughts

Achieving Better Code Quality
February 5, 2011 12:51 AM

I revised a blog post I wrote back in June 2005 on achieving code quality. This is a republish with updated content - http://blogs.averconsulting.com/2005/06/04/code-quality.aspx


memcached
January 15, 2011 5:59 PM

Caching data in an in-memory cache is a common approach to speeding up data access. memcached is one such key/value-based distributed object caching system that works this magic for you.

I was about to write a blog about memcached and found this wonderful "story" - http://code.google.com/p/memcached/wiki/TutorialCachingStory . I love the way it's been written. Enjoy the read.

If you are wondering how to install memcached on Mac OS X, check out https://wincent.com/wiki/Installing_memcached_1.4.4_on_Mac_OS_X_10.6.2_Snow_Leopard . It's amazing how at the end you feel you should be awarded a PhD! I guess that shows I am not a UNIX/Linux geek.

I did, though, install it with no sweat on my Ubuntu 10.10 virtual machine with one simple command -
>> sudo apt-get install memcached

memcached runs as one or more instances across one or more machines, outside of the JVM. memcached clients know where all the memcached servers are, but the servers have no idea about each other. You guessed right - the distribution is handled entirely by the clients, with no coordination or replication between servers. Frankly, in most general cases that is not needed. memcached is self-recovering, in that a server going down does not affect the other servers. When the "sick" server comes back up it simply rejoins the club of servers.
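The client-side routing described above can be sketched in a few lines. This is purely an illustrative sketch - the class name and the naive modulo hash scheme are my own, not actual memcached client code; real clients typically use consistent hashing so that adding or removing a server remaps only a fraction of the keys:

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch: how a memcached client could pick a server for a key.
// Only the client knows the server list; the servers never talk to each other.
public class McRouter {
    private final List<String> servers;

    public McRouter(List<String> servers) {
        this.servers = servers;
    }

    // Map a key to one of the configured servers.
    public String serverFor(String key) {
        // floorMod avoids a negative index for negative hash codes
        return servers.get(Math.floorMod(key.hashCode(), servers.size()));
    }

    public static void main(String[] args) {
        McRouter router = new McRouter(Arrays.asList(
            "10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"));
        // The same key always maps to the same server.
        System.out.println("user:42    -> " + router.serverFor("user:42"));
        System.out.println("session:99 -> " + router.serverFor("session:99"));
    }
}
```

Because every client computes the same mapping, no server needs to know about any other - which is exactly why a server going down only affects the keys that hashed to it.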

Installing Ruby 1.9.2 and Rails 3 on Mac OSX
December 29, 2010 11:28 PM

For those struggling to get Ruby 1.9.2 and Rails 3 installed on Mac OS X.

Installing the latest Ruby and Rails turned into quite an adventure. I wonder how many people just give up. Anyway, the best way to install this was using MacPorts. Download and install MacPorts from http://www.macports.org/ .

If the installer complains that you have an older version of Apple's Xcode tool set, then register at Apple's site, download and install the latest Xcode. This is a big download, so be patient.

Next follow the instructions at http://www.ruby-forum.com/topic/178659 (posted by Conrad Taylor) to install the Ruby 1.9.2 and Rails 3.

A few quick pointers...

Step 2) type in "sudo port install ..." for mysql and sqlite.

Step 5) type
sudo gem install kwatch-mysql-ruby --source=http://gems.github.com -- --with-mysql-config=/opt/local/lib/mysql5/bin/mysql_config

Step 7) type
  rails new testapp

(testapp could be any name)

Before you get carried away, create a Rails application and ensure you can access it in the browser. Go beyond the hello-world page and add a model/controller. Generate the basic stubs using scaffolding and ensure db connectivity to the default sqlite3 db.




Unit Testing with Mocks - EasyMock, JMock and Mockito
December 17, 2010 11:28 PM

The Oxford dictionary defines mock as "make a replica or imitation of something". That very much holds true in the case of mock-assisted unit testing.

My definition: "Mocking is the discipline of identifying the dependencies of the unit being tested and thereafter providing imitations of those dependencies, such that the class being tested has no knowledge of whether a dependency is real or an imitation."

There is a distinction in what you really expect to test in a unit test - state verification vs. behavior verification. I would point you to an excellent article by Martin Fowler on the subject - http://martinfowler.com/articles/mocksArentStubs.html

In the real world the difference between the two blurs, and sometimes for good reason. You do not have all the time in the world to sit down and write the ideal set of unit tests for each. State verification typically checks whether the state of objects is as expected after the test has run. Behavior verification checks whether certain methods were invoked (and how many times, with what arguments, etc.).
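To make the distinction concrete, here is a hand-rolled sketch showing both styles against the same fake dependency. The ZipDao and ZipService names are invented for this illustration (run with java -ea so the asserts fire):

```java
import java.util.Arrays;
import java.util.List;

// Invented for this sketch: a dependency contract and the unit under test.
interface ZipDao {
    List<String> getZipCodes(String state);
}

class ZipService {
    private final ZipDao dao;
    ZipService(ZipDao dao) { this.dao = dao; }
    public List<String> getZipCodes(String state) {
        return dao.getZipCodes(state.toLowerCase());
    }
}

public class VerificationStyles {
    public static void main(String[] args) {
        final int[] calls = {0};
        // Hand-rolled fake: canned data plus a call counter.
        ZipDao fake = new ZipDao() {
            public List<String> getZipCodes(String state) {
                calls[0]++;
                return Arrays.asList("20147", "20191");
            }
        };
        ZipService svc = new ZipService(fake);
        List<String> zips = svc.getZipCodes("VA");

        // State verification: inspect the result the unit produced.
        assert zips.size() == 2;

        // Behavior verification: inspect how the dependency was used.
        assert calls[0] == 1;
        System.out.println("state and behavior both verified");
    }
}
```

The mock frameworks covered below automate exactly this bookkeeping - the canned return values and the call counting - so you do not have to write fakes by hand.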

Using Mock frameworks you can stub out what data you want returned from a call to a dependent class. And that is what is of interest to me. Imagine a service class calling a classic data access class. I want to test the service class but stub out the calls to the DAO. At the same time I want the DAO to return different sets of data so as to exercise my different paths in the service class.

In the Java sphere there exist a few strong frameworks to help the developer mock dependencies. In this blog I will cover examples of three frameworks - EasyMock, JMock and Mockito.

The frameworks typically provide the following features:
  • Mock both classes and interfaces (you cannot mock final methods or final classes).
  • Return your own data from calls to mock objects.
  • Throw an exception from the mock object.
  • Chain multiple mock calls.
  • Specify how many times a method must be called in a unit test.
  • Support partial mocking (EasyMock and Mockito only).
Let's look at real examples. As usual the complete Eclipse project is zipped up and available at the very bottom.
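For context, here is a plausible sketch of the two classes the examples below exercise. The actual code in the attached zip may differ in detail; only the names come from the test snippets:

```java
import java.util.List;

// The DAO contract that the mocks will stand in for.
interface LookupDao {
    List<String> getZipCodes(String stateCode);
    void addZipCode(String stateCode, String zipCode);
}

// The unit under test. It simply delegates to the DAO, so with the DAO
// mocked out the tests exercise only this class.
public class LookupDataServiceImpl {
    private LookupDao lookupDao;

    public void setLookupDao(LookupDao lookupDao) {
        this.lookupDao = lookupDao;
    }

    public List<String> getZipCodes(String stateCode) {
        return lookupDao.getZipCodes(stateCode);
    }

    public void addZipCode(String stateCode, String zipCode) {
        lookupDao.addZipCode(stateCode, zipCode);
    }
}
```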


EasyMock

EasyMock follows the following design paradigm:
  • Create the Mock
  • Connect the mock with the object being unit tested
  • Set up the expectations on the mock (which methods on the mock need to get invoked, how many times, etc). To set up the expectations you call methods on the mock and that's it. The mock object will record the facts so as to verify them later.
  • Switch to replay mode. Hereafter the mock will keep track of invocations to itself and throw exceptions if an unexpected invocation is made.
  • Execute the test
  • Verify the mock invocations.
import static org.easymock.EasyMock.createControl;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;
import static org.easymock.EasyMock.verify;

private List<String> vazips = new ArrayList<String>();
private LookupDataServiceImpl svc;
private LookupDao mockdao;

public void setUp() {

vazips.add("20147");
vazips.add("20191");

// create the object to test
svc = new LookupDataServiceImpl();

// create the mock dao
mockdao = createControl().createMock(LookupDao.class);
svc.setLookupDao(mockdao);
}

public void test_NoExpectationsRecorded() {
// - no expectations are recorded
// - WILL FAIL the test since EasyMock requires you to
// - record the expectations

// - Switch to replay mode and run test
replay(mockdao);

// - invoke test
svc.getZipCodes("va");

// - verify WILL NOT GET CALLED...will fail in previous step
verify(mockdao);
}
In this test case I have not recorded any expectations. By default EasyMock mocks will then expect that no invocations be made to them. This test case will fail with an error (entire stack trace not included):
java.lang.AssertionError: 
Unexpected method call getZipCodes("va"):

Now let's look at a happy-path example where an expectation is provided.
    public void test_CorrectExpectationIsRecorded() {
        // - one expectation is recorded
        mockdao.addZipCode("va", "11122");

        // - run test
        replay(mockdao);
        svc.addZipCode("va", "11122");

        // - verify
        verify(mockdao);
    }

This test case will pass since we have recorded one expectation and the test execution did invoke the expected method, which was verified in the call to verify.

Now lets try to stub out the data that is returned from our DAO object.
    public void test_VerifyReturnData() {
        // - one expectation is recorded
        expect(mockdao.getZipCodes("va")).andReturn(vazips);

        // - run test
        replay(mockdao);
        List<String> zipcodes = svc.getZipCodes("va");

        for (Iterator<String> iter = zipcodes.iterator(); iter.hasNext(); ) {
            System.out.println((String) iter.next());
        }

        // - verify
        verify(mockdao);
        assertTrue(zipcodes.size() == 2);
    }

Variable vazips holds some hard-coded stub data (two zip codes). We would like a call to dao.getZipCodes with an argument of stateCode="va" to return this test data. The way we do this is in the expectation:
expect(mockdao.getZipCodes("va")).andReturn(vazips);

To throw an exception from our DAO use:
expect(mockdao.getZipCodes("va")).andThrow(new RuntimeException("mock runtime exception"));

As you can quickly see, there is real value in mocking out dependencies. What you have to guard against is over-dependence on mocking. I have seen test cases that set up so many expectations that after a while it is hard to understand what they were really testing. If you only do behavior verification, then I personally think you have not unit tested the code. At the end of the day everything is about business logic and data. Your unit tests should verify those. As in the last example, we used EasyMock to return some test data so that we could unit test different execution paths in our service.


JMock

JMock is very similar in that you set up expectations, then execute and finally verify.
import org.jmock.Expectations;
import org.jmock.Mockery;
import org.jmock.integration.junit4.JMock;
import org.jmock.lib.legacy.ClassImposteriser;
import org.junit.Assert;

// the mockery creates the mocks and verifies the expectations
private Mockery context = new Mockery() {{
    setImposteriser(ClassImposteriser.INSTANCE);
}};
// in setUp: mockdao = context.mock(LookupDao.class); svc.setLookupDao(mockdao);

   public void test_VerifyReturnData() {
        final String stateCode = "va";

        // - record your expectations here
        context.checking(new Expectations() {
            {
                oneOf(mockdao).getZipCodes(stateCode);
                will(returnValue(vazips));
            }
        });

        // - Execute test
        List<String> zipcodes = svc.getZipCodes("va");
        for (Iterator<String> iter = zipcodes.iterator(); iter.hasNext(); ) {
            System.out.println((String) iter.next());
        }

        // - verify
        context.assertIsSatisfied();
        Assert.assertTrue(zipcodes.size() == 2);
    }

If you had to throw exceptions from your mock:
       context.checking(new Expectations() {
            {
                oneOf(mockdao).addZipCode("12121", "ca");
                will(throwException(new RuntimeException("mock runtime exception")));
            }
        });

Mockito

Mockito is a relatively new framework. Where it differs from EasyMock and JMock is in the way it deals with expectations. EasyMock and JMock require you to record the expectations up front; Mockito does not. It lets you verify the mock invocations of your choosing AFTER the test is executed.

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;
    import static org.mockito.Mockito.when;

    public void setUp() {
        vazips.add("20147");
        vazips.add("20191");

        // create the object to test
        svc = new LookupDataServiceImpl();

        // create the mock dao
        mockdao = mock(LookupDao.class);
        svc.setLookupDao(mockdao);
    }

    public void test_Mock() {
        // NOTE: Mockito does not have the concept of recorded
        // expectations. You execute the test with the mock and
        // afterwards verify any method invocations you want.
        // run test method

        svc.getZipCodes("VA");

        // verify
        verify(mockdao).getZipCodes("VA");
    }

In this case you connect your class to the mock object as before, then simply run your test. After the test is executed you verify that expected methods were called. In this case a call to getZipCodes with an argument of "VA". Change the argument to "CA" and the verification will fail.

Here is how you would stub data.
    public void test_VerifyReturnData() {
        // stubs the return values on the mock dao
        when(mockdao.getZipCodes("va")).thenReturn(vazips);

        // run test method
        List<String> zipcodes = svc.getZipCodes("va");
        System.out.println(zipcodes.size());
        for (Iterator<String> iter = zipcodes.iterator(); iter.hasNext(); ) {
            System.out.println((String) iter.next());
        }

        // verify
        verify(mockdao).getZipCodes("va");
        assertTrue(zipcodes.size() > 0);
    }

Finally, if you were to mock exceptions:
when(mockdao.getZipCodes("ca")).thenThrow(new RuntimeException("mock runtime exception"));

Each framework has its own advantages, but none so ground-breaking that one rules over the others. Use what you are comfortable with.

Click here to download the zip containing the source code and eclipse project. Please download libraries from:  http://www.easymock.org , http://www.jmock.org , http://www.mockito.org

Kanban For Software Development
December 2, 2010 7:03 AM

There are many ways to develop software using Agile techniques. Kanban is a recent entry into this space and has some interesting dynamics going for it. It's great to see new ideas from across the industry being applied to software development. Not sure if Kanban is the final answer though - I'll save that for my last paragraph.

In SCRUM we break down a release into small sprints. Each sprint begins with planning and ends with a demo and retrospective. That sounds reasonable until you start facing some challenges.
  • Teams struggle to fit development, system and acceptance testing into a single sprint (1-3 weeks). Especially true in large organizations that are struggling to implement agile in a silo'ed and political environment.
  • Some folks who are really good at what they do, finish up tasks earlier and may have to face idle time (sometimes they can help the laggards).
  • Folks who are new or just slow can put an entire sprint into jeopardy.
  • Team dynamics cannot be predicted. Early planning estimates can go completely haywire. No team remains static and each person in a team brings a different level of expertise to the table. Thus making it harder to predict estimates.
  • What if you have a development team that speeds ahead and is done halfway through the sprint? They have to wait until the next sprint planning to figure out what needs to be done. Or they pick additional tasks from the backlog and hope they can stay ahead. But now the testing team has more work than they anticipated.
  • What if the testing team is done and is waiting for additional tasks, which they will only get at the end of the next sprint?
Now all of this can be addressed in SCRUM with some degree of success. None of these are show-stoppers to me for using SCRUM. But it begs the question - is there a more natural way to do development?

My Ignorable Rant: Those who go about speaking about Agile without ever having written any significant amount of code just do not get it. Writing good software is hard and unpredictable. You cannot box someone into 1-2 weeks and have a cookie-cutter approach to development. Development is an art that is perfected over years - alas, often not seen that way anymore. Writing code is not about taking things off the shelf and assembling as you go; we have not reached that stage yet. A lot of thought and rigor needs to be put in to build good code.

Software development with Kanban attempts to break out of the current model, breathe some fresh air into all of us and yet stay true to Agile principles (see the AgileManifesto).

Kanban (Japanese for "visual card") in software development is inspired by Lean principles and the Toyota Production System. Avoid waste. Kanban specifies creating a set of states (vertical columns on a board), each representing a certain stage of development. A work item flows through the states from left to right. Work items can go back to previous states if needed. Let's define a few states:

Backlog | Story Queue | In Progress | System Testing | UAT Testing | Deployment | In Production

I have added quite a few states here to depict a slightly more granular level of detail. Now the only thing Kanban requires you to define is the WIP limit, or Work In Progress limit. The WIP limit is the maximum count of work items that can be allocated to any of your states. You can also define a different WIP limit for each state. This can, for example, reflect ground realities such as skill gaps in teams and simple resource constraints. Say you have 5 developers and only 3 testers in system testing: you can decide that at any given time only 10 work items can exist in the In Progress state and only 6 in the System Testing state. This allows for some flexibility in case someone finishes a task early and needs something else to work on.
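The WIP-limit rule is simple enough to sketch in code. This is purely illustrative - the class and method names are invented, not part of any Kanban tool:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of one Kanban state column with a WIP limit.
public class KanbanColumn {
    private final String name;
    private final int wipLimit;
    private final List<String> items = new ArrayList<String>();

    public KanbanColumn(String name, int wipLimit) {
        this.name = name;
        this.wipLimit = wipLimit;
    }

    // An item can only be pulled in while the column is under its limit.
    public boolean pull(String workItem) {
        if (items.size() >= wipLimit) {
            return false; // column full: upstream waits, or folks help clear it
        }
        items.add(workItem);
        return true;
    }

    // Completing an item frees capacity for the next pull.
    public boolean complete(String workItem) {
        return items.remove(workItem);
    }

    public int load() {
        return items.size();
    }

    public static void main(String[] args) {
        KanbanColumn testing = new KanbanColumn("System Testing", 6);
        for (int i = 1; i <= 6; i++) {
            testing.pull("story-" + i);
        }
        // The seventh item cannot be pulled until something completes.
        System.out.println("room for story-7? " + testing.pull("story-7"));
        testing.complete("story-1");
        System.out.println("room now? " + testing.pull("story-7"));
    }
}
```

With a limit of 6, the seventh pull is refused until an item completes - which is precisely the signal for developers to go help clear the column rather than pile more work onto it.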

The backlog contains the list of things to be implemented. The story queue is a prioritized set of stories from the backlog; the backlog itself is only an unsorted holding area. Within each state we can further subdivide into horizontal sections to separate, say, items being worked on from those not yet started.

Work items progress all the way from left to right. Once the WIP limit for a state is reached no more items can be added; when the count drops below the limit, items can be pulled in again. If the next state is full then the folks working in the current state could lend a helping hand to clear its queue. Developers can put on tester hats for a few days to help clear out the testing state and reduce the WIP there. This level of collaboration is critical, otherwise folks can be sitting idle.

Gone are the sprints and the definite structure that SCRUM gives us. It's all about getting work done from a prioritized work queue and the work capacity that exists within the team.

Now this may remind you of a pipeline process or an assembly line. And that's exactly where the similarity ends since developing software is way different from manufacturing on assembly lines (see my earlier rant). But the underlying principle of avoiding waste holds true.

I do think that successful application of Kanban in software development projects requires a higher level of team collaboration and management support than, say, SCRUM. Teams have to change the way they work together. As mentioned earlier, if the testing team is in a crunch, the development team jumps in to help out a little, thereby allowing the assembly line to keep moving. Players throughout the states need to realize that work can come in at any time but will never exceed their current capacity.

So does this mean we have to choose between SCRUM and Kanban? As always - it depends on your situation.

In the beginning I said "Not sure if Kanban is the final answer though - save that for my last paragraph." Why did I say that? If you have not read my rant, go back and read it. That's why.

GIT for Version Control
November 3, 2010 10:01 PM

I generally do not get into version control wars. Working in large firms often means you are told what to use. It is less often the case that you get to choose. A whole ecosystem is then stood up around version control - people to support it, specialized hardware, processes, separation of duties and what have you. All for a good reason, but developer creativity falls over time.

Try doing this where you work (only if you are in a large regulated firm). Get the code from your repository... now go home and throw the VPN away. What, no connection to the mothership? Yes, no connection. Now go through the script below.
  • Assume you are on version 2.5 of your application. You would like to see the difference between certain files in 2.5 against an old revision 1.2.
  • You then decide you want to add a new feature. So you decide to branch (branch-1). Some of us like branches. And why not - why mess around with the main code.
  • Add some new code.
  • Check that in.
  • Damn I want to try out a different way of doing this, but don't want to mess with this branch - again.
  • Switch to mainline and create another branch (branch-2).
  • On branch-2 I try out my new approach. Got it working. Check it in.
  • After some thought I decide that branch-2 is the way to go.
  • Delete branch-1.
  • Then merge branch-2 back into mainline.
  • Finally delete the temporary branch-2.

Can you do that where you work in a matter of minutes? I bet you cannot. Your CM team would be throwing fits, your requests for branching would send red signals up the corporate chain.

Well, let me tell you that the script above is possible with Git - the open source distributed version control system (DVCS). This is not about which version control system is better. Read about it; if you like Git, great, otherwise move on. Mercurial SCM is another open source DVCS and its usage is very similar to Git - instead of using the command "git" you use "hg". I cover Git here.

Git begins with the notion that all developers are equally powerful - what a novel idea! When you bring down code from, say, a central location, you bring down everything, including history. You can cut the network cord now and still perform the entire script above.

If your intention was instead to check in a project into GIT then run commands...
>> git init
>> git add .
>> git commit -m "initial checkin"

If your intention instead was to check out an existing project from a central source - run commands...
>> git clone <url>

Either way you can now start working on this mainline project code. To start tracking new files, or to stage updates to existing ones, use the command
>> git add [filename] [filename]

The "add" command simply stages the file in a local staging area. When you are ready to finally check in your changes, run command
>> git commit -m "your message"

The "commit" command creates a snapshot containing all the files you previously added. You can use the git "status" command to see what changed since the last check in.

Now realize that these commits are still being made to your local Git repository. Your code base is fully distributed (in its entirety) across all your users. Each user has the power to run our test script in complete isolation from others, with no dependency on a central server.

Now if you had to branch - say a temporary branch named "test" - you would run command
>> git branch test

To switch to that branch run
>> git checkout test

Add a few files, run the add and then the commit command. Now switch to mainline by running command
>> git checkout master

"master" is the name representing the mainline.

If you check the directory now, you will not see the files you added on the branch. That work is kept isolated within the branch. Switch back to the branch and you should see the new files. Git takes care of synchronizing the contents of your folder with your current branch. No maintaining multiple directories. Elegant and useful.

Finally you decide to merge the work in the branch into the mainline and then delete the temporary branch.

If you are unsure which branch you are on run
>> git branch

This should list all branches, including master, with an asterisk preceding your active branch.

Switch to mainline if you are not already there
>> git checkout master

Run merge
>> git merge test

You should now see the new files have been merged into mainline. Finally delete the temporary branch
>> git branch -d test

Git gives you complete revision history all the way back to the first commit and also gives you the ability to perform inexpensive branching locally to work on different work items. Each commit keeps track of which files were modified/added during the commit as a changeset. This changeset can then be reverted using the unique id assigned to each commit.

Finally, the question in your mind: how do I share code with other folks? You use the "fetch" and "pull" commands to synchronize with a remote server and bring down the latest changes. To upload changes you perform a "push".

Hopefully I have created at least a little interest in Git, or in distributed version control systems (DVCS) in general.

Architecture & Enterprise Architecture
July 24, 2010 8:19 AM

What does Enterprise Architecture really mean? Why does a company need to have a strong Enterprise Architecture?

Depending on who is reading, each person has a different perspective on what Enterprise Architecture is.
  • Hands-on technical person's view: Ivory tower, occasionally useful but mostly a pain, where all the "talkers" end up, and so on.
  • Management: I have a deadline, get out of my way.
  • Sr Mgmt: Hmmm, do I need that? Actually I know what that is - let me tell you (but yikes, I have never done that before).
  • Operations/Support: If they existed I would not have this mess.
  • Business: I need it now. What's all this waste in adding a simple MS Access application on my desktop to create quarterly financial reports?
  • and so on
Enterprise Architecture (EA) in most companies is often non-existent or grossly inadequate.

Before we begin to describe Enterprise Architecture, let's look at the two words - "Enterprise" & "Architecture". Rather than me making up new definitions, let's use definitions/descriptions that some leading industry groups have come up with.

TOGAF defines "enterprise" as any collection of organizations that has a common set of goals. For example, an enterprise could be a government agency, a whole corporation, a division of a corporation, a single department, or a chain of geographically distant organizations linked together by common ownership.

The definition of an architecture used in ANSI/IEEE Std 1471-2000 is:
"The fundamental organization of a system, embodied in its components, their relationships to each other and the environment, and the principles governing its design and evolution."

Take a medium to large system you have worked on: break it up into its components, study the relationships between them, relate them to the business/technical environment in which they execute, and finally - whether you did this formally or not - there will be some design patterns and themes that stand out.

If you are a small company with a handful of applications you may not invest in an elaborate EA practice - yet. But even here I would argue you have the workings of an EA group; otherwise you would not survive. It's just that you did not call it EA. If you happen to be a large company with thousands of applications of all sizes, this is where an effective EA will lay the foundations for a more competitive company.

What does the EA do? First things first: EA is not only about technology. Enterprise Architecture should be driven by business needs that add value to the company's bottom line. EA includes Business, Data, Applications, Technology and Services architecture. If you look at TOGAF 9 it lays out all of those domains except Services. I believe Services architecture should be called out separately; the science of implementing an effective SOA is quite different from traditional silo'ed applications that are walled off from the rest.

Given this context let me try and answer  'What does the EA do?'-
"An effective EA practice should be responsible for laying out and governing a measurable path to execution of a company's business strategy; aligning with the operating plan".

Think about it for a moment. Every successful company has a clearly laid out business strategy and an operating plan to execute that strategy. Example: Company ABC Inc is in the business of creating pencils and crayons for school-age kids. The strategy is to get into the top 3 firms in their business within the next 3-5 years by focusing on elementary grade school kids. They should lay out an operating model that describes the time-frame and how they will execute that strategy.

ABC Inc would need to hire the right kind of people, maybe do some restructuring to become more lean and reduce waste, spend more time studying the market, focus on targeted advertising, sponsor little league baseball games to ingrain their brand name among kids, and so on.

IT is an important element of the EA, but it's not the be-all and end-all. If you go back to the start of the article where I list how each person sees architecture - in TOGAF 9 these are views seen from different perspectives or viewpoints. Each viewpoint only sees or cares about part of the picture. An EA practice should maintain the bigger picture and ensure that nothing gets lost.

Mule ESB
June 28, 2010 9:53 PM

This is a blog article I had written a while back but, due to time constraints, never got around to publishing. Here goes. Warning: this is not a primer on Mule. For that go to mulesoft.org

Mule is a lightweight open source ESB that is not bogged down by standards yet enables you to integrate applications that support various open standards. By bogged down I mean it's not a JBI-compliant ESB. It does support JMS, Web Services and Java POJOs.

For a general overview of an ESB see my article from some time back - Enterprise Service Bus. An ESB is made up of a high-performance messaging layer (such as ActiveMQ) and the ESB software itself (Mule). The ESB software provides integration services to create/host/manage business services, plus mediation, content transformation, routing, security and so on.

To get started go to http://www.mulesoft.org and download the software. I downloaded the full version, 2.2.1. For the messaging layer download ActiveMQ from http://activemq.apache.org/ (the version I used is 5.2.0 ... latest as of today is 5.3.2). Finally, I am using Eclipse 3.5 and also installed the Mule IDE Eclipse plug-in.

Once you have the Mule Eclipse plug-in installed, create a new Mule project and then add the configuration below to make it work.

Our use case is:
- Purchase orders come in as XML files to a folder.
- We want to scan the folder periodically and load up any incoming files.
- We then check whether the purchase order contains an item with part number "WXJ-1".
- If the part number is found, the purchase order is routed to the JMS queue "ImportantPOQ".
- All other orders are routed to the JMS queue "AllOtherPOsQ".

To begin, configure the JMS connector in the mule-config.xml:
    <mule-jms:activemq-connector name="jmsConnector"
        brokerURL="tcp://localhost:61616" />

Next, configure the file connector to poll for incoming files:
    <mule-file:connector name="fileConnector" fileAge="5000"
        autoDelete="false" pollingFrequency="5000" />

The service itself is configured as:
    <model name="myproject">
        <service name="FileReader">
            <inbound>
                <mule-file:inbound-endpoint address="file:///Users/mathew/temp/in"
                    moveToDirectory="/Users/mathew/temp/out" moveToPattern="#[DATE]-#[ORIGINALNAME]">
                    <mule-file:filename-wildcard-filter
                        pattern="*.xml" />
                </mule-file:inbound-endpoint>
            </inbound>
            <log-component></log-component>
            <outbound>
                <filtering-router>
                    <mule-jms:outbound-endpoint queue="ImportantPOQ" />
                    <mule-xml:xpath-filter pattern="/PurchaseOrder/Items/Item/@PartNumber"
                        expectedValue="WXJ-1" />
                </filtering-router>
                <forwarding-catch-all-strategy>
                    <mule-jms:outbound-endpoint queue="AllOtherPOsQ" />
                </forwarding-catch-all-strategy>
            </outbound>
        </service>
    </model>

A Mule service has an inbound router, a service component and an outbound router. The inbound router is used to configure how the service will consume messages and from where. In this case you can see the use of the file connector to read files from a predefined folder.

The Service component typically contains your business logic. For the purposes of this article I do not have a service component. Instead I use a predefined log component to print out the file contents to the console.

The outbound router is used to configure where the processed messages will go once the business component is finished with it. In this case I configure a filtering-router which sends messages to the queue ImportantPOQ if the part number is WXJ-1. The forwarding-catch-all-strategy router is used for all other unmatched messages.

I have a little utility class that spits out a bunch of test xml files. The project structure in Eclipse should look like this...


Ignore the java classes since my next task is to send the PurchaseOrder to a POJO service component. For the purposes of this article there is no java code used.

Start your ActiveMQ server and create the two queues noted in the mule-config.xml. Right click on the mule-config.xml -> RunAs -> Mule Server. If everything goes well the server should start and begin polling for files in the predefined folder. Run the FileCreator.java to put some files into your folder and in a few seconds you should see that the files are processed and moved to the processed folder. Go to the ActiveMQ web admin page at localhost:8161/admin/queues.jsp. You should see messages in both queues. Most will end up in the AllOtherPOsQ. Any messages with PartNumber WXJ-1 will end up in the ImportantPOQ.

Download the project by clicking here Mule Sample.

Virtualization - From Individual Desktops to Servers
April 28, 2010 9:36 PM

There is a quiet but steady change going on in data centers and IT organizations around the world: virtualized infrastructure. Underutilized single-OS machines are being re-targeted to run multiple operating systems, and therefore scale horizontally, drastically reducing labor, hardware, OS and physical real estate costs in data centers. Add to that the overall reduction in energy costs and suddenly you have something really compelling going on.

When I first played around with virtualization I used Parallels Desktop on my MacBook Pro. It was a great first brush with installing other OS's on the Mac. Then I used Sun's VirtualBox to do the same. Both of these, and also VMware Player/Workstation, require a host OS. In my case it was Mac OSX and later my Windows OS. You then install the target OS into the tool.



The host OS (HOS) sits between the virtualization software and the hardware. This is fine for a desktop pattern of use, and from a developer point of view it is fine too. Nowadays if you call vendors in for demos, quite often you see them bring up a VM (Virtual Machine) on their desktop and take you for a spin through their product.

But for a production environment, where you want as little fat as possible between the virtualization software and the hardware, we had to find a way to get rid of the heavy HOS and instead replace it with a lightweight JEOS (Just Enough Operating System), sometimes also referred to as the hypervisor. This allows for efficient utilization of the hardware resources as well as features such as fail-over which are required for critical production applications.

One hypervisor I am trying to install is the free VMWare ESXi. Unfortunately my old desktop does not seem to pass the VMWare compatibility test. ESXi can be installed directly on your machine (it will replace any existing OS you have...so beware) and then you can use the VMware vSphere client to manage your ESXi server and install target OS's. The paid version of the server vSphere (vs free ESXi) supports load balancing and backup features for VM's (among a whole host of other additional features).

One neat product offered by VMware is the VMotion module. This allows for moving virtual machines from one server to another in a production environment with no impact to users. This feature depends on using the VMware cluster file system, VMware vStorage VMFS. The VM is stored in a single file and the vStorage VMFS allows multiple concurrent VM's to read and write to it. Now that the VM's can all access the same VM file, what has to be transferred, preferably over a high speed network, is the run-time state of an individual VM. Quite interesting...me thinks so.

Virtual Appliances
January 26, 2010 9:17 PM

Recently I had to review a product which was being offered as a Virtual Appliance. This was the first time I had come across something like this. Pretty soon I was "googling" around to understand this better.

A virtual appliance is a pre-built, pre-configured virtual machine that has an OS, the application and other supporting software all packaged together and ready to go. All you do is use a product such as VMWare Fusion/Player or Sun VirtualBox to play the appliance and voila, you have a running product. I downloaded a Wordpress Virtual Appliance from http://www.turnkeylinux.org/wordpress. Turnkey has many other appliances at http://www.turnkeylinux.org/

I have both VMWare Fusion and Sun VirtualBox on my Mac. Next I either open the appliance in VMWare or import the appliance in VirtualBox. In a few minutes I had the VM ready and the Wordpress application working like a champ. In my excitement to play with this toy I installed appliances for Bugzilla and Drupal. The simplicity of this is what amazed me.

I find this approach of delivering software very innovative. It drastically simplifies the administrative tasks required to set up an application. I do not know how efficient this model is in real-world use, but frankly I don't care; it's just too elegant.

To learn more about Virtual Appliances check http://www.vmware.com/appliances/getting-started/learn/overview.html




JEE - The Road Ahead with JEE 6
January 3, 2010 10:38 AM

Prior to JEE 5, the JEE/J2EE platform did not go down that well across the developer community. As software evolves and new ideas flow in, it was only natural that JEE 6 would refine the platform and make it better.

The Spring Framework came into being partly due to the complexity of J2EE. But today SF itself is getting large. SF gave you an abstraction over existing frameworks and made it easier to wire different technologies together. But as the complexities in JEE are addressed you have to start thinking whether SF is overkill. OK, I did not exactly mean to say that, but you need to think about it.

Below are a few notable items from JEE 6 (I do not go into the merits of each nor do I list every new feature):
  • Context & Dependency Injection (CDI). This allows for any POJO to be injected into dependent classes. Right now you have the ability to inject container managed resources (such as connection pools, EJB's). You cannot inject regular POJO objects. JEE6 adds that ability.
  • CDI - Introduces annotations such as
    • @ManagedBean - Identifies a bean as a container managed bean.
    • @Model identifies a model object in a MVC architecture.
    • @*Scope annotations such as @SessionScoped which tie an instance of a bean to a scope such as request, session or application.
    • @Inject is used to inject managed beans into other dependent classes.
    • ...and so on...
  • CDI allows session beans to be used in place of JSF beans (influenced from Seam obviously). This means you can use session beans in your JSF EL expressions.
  • Bean Validations: Set of annotations to apply on java objects so that the validations can be shared between UI and backend tier.
  • Eliminate web.xml with new annotations such as @WebServlet, @WebFilter , etc.
  • Ability of servlets to respond to asynchronous requests (AJAX). This could greatly simplify the existing mechanisms on the server to handle such requests. Having the servlets take over this allows for more efficient use of server threads vs existing implementations.
  • Introducing the concept of web fragments. Basically separate web.xml files so that appropriate frameworks can be configured in their own files.
  • JSF 2.0 (includes annotations that let you eliminate the faces-config.xml file, AJAX support, facelets).
  • Simplification of EJB in 3.1 (no requirement for business interfaces, singleton beans, asynchronous session beans)
  • Concept of Profiles to differentiate between different configurations or stacks of JEE...similar to J2ME. See my article on J2ME at 
  • Ability to package EJBs as part of the WAR file.
  • Inclusion of JAX-RS into the JEE spec (REST with JAX-RS).
  • Support for JPA 2.0 (notably to me is the inclusion of a Criteria API...such as the one in Hibernate).
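To make the CDI bullets above concrete, here is a minimal sketch of injecting a plain scoped POJO. The class and field names are hypothetical, and since this is container-managed wiring it only works inside a JEE 6 / CDI container:

```java
import java.io.Serializable;
import javax.enterprise.context.SessionScoped;
import javax.inject.Inject;

// Hypothetical example: a plain POJO tied to the session scope.
@SessionScoped
public class ShoppingCart implements Serializable {
    // ... cart state ...
}

public class CheckoutService {
    @Inject // the container locates and injects the session-scoped ShoppingCart
    private ShoppingCart cart;
}
```

Before JEE 6 only container-managed resources could be injected like this; the point of CDI is that any POJO such as ShoppingCart now qualifies.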
While a lot of things have been added in JEE 6, I do not see any NEW path-breaking changes (except for the async servlet invocation). Seems to me JEE is playing catch-up with innovations in other OS frameworks (such as Spring, Hibernate, etc.).

In conclusion - I wonder where the next set of innovations will come from and what form they will take. I like that Spring has introduced "Spring Integration". This should allow applications to efficiently handle common integration patterns.

Is Axis2 way too complicated for web apps?
December 10, 2009 10:24 PM

I recently had the pleasure of using Axis2 on a web application. For reasons I cannot go into here, we were unable to use my original choice, Spring-WS. We were using Spring, so it would have been a piece of cake to expose a document-style web service using Spring-WS.

When I finally got my application modified to use Axis2 to expose the web service, the final result was not pretty. In Axis2 you have to create the following in your WEB-INF:

WEB-INF
   - services\
             services.list (list of aar file names ... one on each line)
             service1.aar
             service2.aar
  - modules\
             modules.list (list of mar file names ... one on each line)
             module1.mar
  - conf\
             axis2.xml

In my case each aar file contains:
  service1.aar
        - META-INF\
              service.xml (this actually contains the service configuration for service1)

service1.aar is a jar that contains your service configuration and any libraries that you may choose to include. In my case all libraries were in WEB-INF/lib. If you need to implement custom handlers (as was my case) you have to create modules and the corresponding module .mar file. If you want your handler to take effect you need to then pull out the axis2.xml file from the axis2 jar file, modify it to include your handler, then put it into WEB-INF/conf.

All this to expose even a simple secure hello world. I always wondered why the Spring documentation did not cover integrating Spring with Axis2. Now I know. This is way too complicated! Those who read this and say deploy as .jws - that won't work for me. Maybe it does for you.

Apache CXF - Simple WebService With Spring
October 7, 2009 11:02 PM

A reader posted a comment to one of my old blogs on XFire. That rekindled my interest, so I checked the XFire web site only to be informed that XFire is now Apache CXF (version 2.2.3 at this moment in time).

So how hard would it be to convert my old example to CXF? Turned out to be a piece of cake. I had to change the following:
  1. Updated the maven dependencies to reflect CXF libraries.
  2. web.xml - to point to the CXF Servlet
  3. Spring context (app-context.xml) - It was now a lot simpler and cleaner.
  4. Finally I used CXF wsdl2java utils to generate a client
Maven pom.xml
<project>
<modelVersion>4.0.0</modelVersion>
<groupId>com.aver</groupId>
<artifactId>echo</artifactId>
<packaging>war</packaging>
<version>1.0-SNAPSHOT</version>
<name>CXF Echo Service</name>
<dependencies>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring</artifactId>
<version>2.5.6</version>
</dependency>

<dependency>
<groupId>org.apache.cxf</groupId>
<artifactId>cxf-rt-frontend-jaxws</artifactId>
<version>2.2.3</version>
</dependency>
<dependency>
<groupId>org.apache.cxf</groupId>
<artifactId>cxf-rt-transports-http</artifactId>
<version>2.2.3</version>
</dependency>

<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.7</version>
<scope>test</scope>
</dependency>

<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.14</version>
</dependency>
</dependencies>

<build>
<plugins>
<plugin>
<groupId>org.mortbay.jetty</groupId>
<artifactId>maven-jetty-plugin</artifactId>
<configuration>
<contextPath>/echoservice</contextPath>
<connectors>
<connector implementation="org.mortbay.jetty.nio.SelectChannelConnector">
<port>9090</port>
<maxIdleTime>60000</maxIdleTime>
</connector>
</connectors>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<source>1.5</source>
<target>1.5</target>
</configuration>
</plugin>
</plugins>
</build>
</project>



The web.xml...
<web-app>
<context-param>
<param-name>contextConfigLocation</param-name>
<param-value>
/WEB-INF/app-context.xml
</param-value>
</context-param>

<listener>
<listener-class>
org.springframework.web.context.ContextLoaderListener
</listener-class>
</listener>

<servlet>
<servlet-name>CXFServlet</servlet-name>
<servlet-class>
org.apache.cxf.transport.servlet.CXFServlet
</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>CXFServlet</servlet-name>
<url-pattern>/services/*</url-pattern>
</servlet-mapping>
</web-app>

The Spring context file - app-context.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:jaxws="http://cxf.apache.org/jaxws"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
http://cxf.apache.org/jaxws http://cxf.apache.org/schemas/jaxws.xsd">

<import resource="classpath:META-INF/cxf/cxf.xml" />
<import resource="classpath:META-INF/cxf/cxf-extension-soap.xml" />
<import resource="classpath:META-INF/cxf/cxf-servlet.xml" />

<jaxws:endpoint id="echoService" implementor="#echo"
address="/EchoService" />

<bean id="echo" class="com.aver.EchoServiceImpl" />
</beans>
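The implementor="#echo" attribute points the jaxws:endpoint at the Spring bean with id echo. The EchoServiceImpl class itself is not listed here; judging from the output shown further below, a minimal sketch (the method name and timestamp format are my assumptions; the real class, with its JAX-WS @WebService annotation, is in the downloadable project) might look like:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

// Hypothetical sketch of the Spring bean exposed as the web service.
// The real implementation carries a JAX-WS @WebService annotation.
public class EchoServiceImpl {
    public String echo(String message) {
        String now = new SimpleDateFormat("MM-dd-yyyy hh:mm:ss a").format(new Date());
        return "echo: '" + message + "' received on " + now;
    }
}
```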
The wsdl2java command I executed was:

./wsdl2java -p com.aver.client -client http://localhost:9090/echoservice/services/EchoService?wsdl
Run the project using maven: mvn clean package jetty:run
The WSDL is located at the address mentioned above in the wsdl2java command.

Finally I ran the test client that wsdl2java generated...class named EchoService_EchoServicePort_Client

The output was:
Invoking echo...
echo.result=echo: 'uyy ' received on 10-07-2009 10:53:59 PM

Click here to download the maven project for this blog.

Spring Batch 2.0
September 24, 2009 9:12 PM

Previously I had written a 3 part series on Spring Batch 1.x. Since then Spring Batch 2.x has been released and I promised myself (and one reader) that I would get to updating my previous articles to reflect the new release.

Rather than create new articles I have updated the previous 3 articles with the changes made in 2.x.
Happy reading!

Comet , HTML 5 Web Sockets and Server Sent Events (SSE)
August 21, 2009 9:00 PM

There has always been a need to support UIs that constantly update themselves as new data becomes available on the server. The implementation for this falls into two general categories: client side polling or server side push.

Comet is the term often used to describe current client side polling and server side push techniques. Let's delve briefly into what the techniques are and then move on to HTML 5 Server Sent Events and WebSockets. Check out Cometd for an implementation that you can try out. Cometd implements the Bayeux spec. Implementations are available in multiple languages.


Comet Techniques

Client Side Polling

The client periodically polls the server for new data and updates the UI as the data becomes available. The benefit of this is its sheer simplicity. The downside is that for medium to high volume sites we introduce too many useless round trips to the server. From an architectural point of view, if your volumes are relatively low this may be the right solution for you.

Client Side Polling - Long Polling
A slight variation on the previous technique. The client makes a call to the server for data. The server can hold onto the connection for some period of time. When data becomes available it is sent back and the connection is closed. If no data is available within a predefined time then the connection is closed too. Once the connection is closed a new one is created and the process continues. The obvious advantage of this is that we do not keep opening and closing too many connections from the client side.
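The hold-until-data-or-timeout behaviour at the heart of long polling can be modeled with a blocking queue. This is a sketch with hypothetical names, standing in for the real HTTP plumbing:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Models one long-poll channel: the server parks the request until data
// arrives or the timeout elapses; the client then immediately re-polls.
public class LongPollChannel {
    private final BlockingQueue<String> updates = new LinkedBlockingQueue<String>();

    // Called on the server side when new data becomes available.
    public void publish(String data) {
        updates.offer(data);
    }

    // Called per client request: returns data if it arrives in time, else null
    // (which corresponds to closing the empty response so the client re-polls).
    public String awaitUpdate(long timeoutMillis) {
        try {
            return updates.poll(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }
}
```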

Server Side Push
The idea here is for the server to push data updates to the client rather than the client polling the server for updates. In this data exchange pattern the client opens a connection to the server for which the server responds but the client does not close the connection. The client is in a continuous read mode on the connection. Whenever the server has data it sends it back within JavaScript <script> tags. Due to the way the browser renders incoming data, any JavaScript is executed which is how we can call our callback function in the client to render the updates.

If you want full duplex communication between client and server then two connections are required: one for the client-to-server communication and the other for the server-to-client communication. With browsers often limited to two connections per server this can be a big limitation, so you need to use this approach carefully. Otherwise you can run into some interesting problems.


HTML 5

HTML 5 Server Sent Events (SSE)

SSE standardizes the Comet styles of communication between client and server. It adds a new DOM element and the following new tag to support it.

<eventsource src="http://myserver/dosomething" onmessage="alert(event.data)">

There is also a programmatic way to create this.
var es = document.createElement("eventsource");
es.addEventListener("message", callbackmethod, false);
es.addEventSource("http://myserver/dosomething");
document.body.appendChild(es);


Once the server has data it will stream it to the callback method registered above. As per the spec the events have a reconnection time. The browser should check with the server for data updates once the reconnection time has elapsed.
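The stream itself uses the simple text/event-stream wire format: each event is one or more "data:" lines followed by a blank line. A minimal parser sketch for a complete stream (ignoring event names, ids and comment lines) could be:

```java
import java.util.ArrayList;
import java.util.List;

// Extracts the data payload of each event from a text/event-stream body.
// Events are "data:" lines terminated by a blank line.
public class EventStreamParser {
    public static List<String> parse(String stream) {
        List<String> events = new ArrayList<String>();
        StringBuilder data = new StringBuilder();
        for (String line : stream.split("\n", -1)) {
            if (line.length() == 0) {            // blank line dispatches the event
                if (data.length() > 0) {
                    events.add(data.toString());
                    data.setLength(0);
                }
            } else if (line.startsWith("data:")) {
                if (data.length() > 0) data.append('\n');
                data.append(line.substring(5).trim());
            } // other fields (id:, event:, ":" comments) are ignored in this sketch
        }
        return events;
    }
}
```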

HTML 5 Web Sockets
Web Sockets allow for full-duplex communication over a single connection and the API is exposed via JavaScript. The API is quite simple as can be seen from the example below:

var myconn = new WebSocket("ws://myserver/something");
myconn.onopen = function(event) {  }
myconn.send("my message");
myconn.onmessage = function(event) { /* you can read event.data */ }
myconn.onclose = function(event) { }


The above is quite self-explanatory. ws:// stands for a web socket connection and wss:// represents the secure version.

Web Socket implementations are not widely available yet, especially with HTML 5 still not being official. Hopefully in the coming months we will have more widespread support. Remember, for this to work there needs to be support in the browser as well as the web server.


Adobe Flex - Part 2
July 22, 2009 4:18 PM

I modified the previous Flex example app to implement the following:
  • Separated out related code into different physical files.
  • Created the src/components folder to hold reusable UI components in their respective .mxml files. 
  • Created a new class called Movie (file name Movie.as) in a package named dto. This class will hold the values of our grid properties. We will load the XML into instances of this class and collect them in an ArrayCollection.
  • Finally we use some validators in our form to validate that both movie name and rating are entered. Unless entered (as valid) the Add Movie button will NOT be enabled.
To create a reusable component, right click on the src folder in Flex Builder and create a new Flex Component. Put the code in there (minus the mx:Application) and voila, you have a reusable component. When you read the code notice how parameters are defined in the component files and then passed in from the root SecondFlexApp.mxml file.
Here is the new version of the SecondFlexApp.mxml file...
<?xml version="1.0"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" xmlns:comp="components.*" initialize="initApp()">

   <mx:Script>
		<![CDATA[
			import dto.Movie;
			import mx.controls.Alert;
			import mx.collections.ArrayCollection;
		

            [Bindable]
            public var movieList:ArrayCollection;

            public var moviexml:XML = 
		      <movies>
			       <movie>
				     <name>Angels &amp; Demons</name>
				     <rating>1</rating>
				   </movie>
				   <movie>
				     <name>Ice Age 3</name>
				     <rating>2</rating>
				   </movie>
				   <movie>
				     <name>Transformers: Revenge of the Fallen</name>
				     <rating>3</rating>
				   </movie>
		      </movies>;	
      		
            public function initApp():void {
                movieList = new ArrayCollection(); 
                for each (var movie:XML in moviexml.movie) {
                    var mv:Movie = new Movie();
                    mv.name = movie.name.toString();
                    mv.rating = Number(movie.rating);
                    movieList.addItem(mv);
                }
                movieGrid.initApp();
            }
       ]]>
	</mx:Script>	
	

	<mx:VBox width="50%">
	   <mx:Label color="blue" text="Movies I have seen..."/>
	   <comp:MovieGrid id="movieGrid" movieList="{movieList}" width="100%"/>
	   <comp:addmovieform movieList="{movieGrid.movieList}"/>
	</mx:VBox>
		
</mx:Application>        

  • The new xmlns:comp namespace refers to the components folder (components.* allows us to refer to all components in the src/components folder).
  • We pass references to values in the mx:VBox; for example, in comp:MovieGrid we pass the movieList parameter into the component.
  • The mx:VBox is a layout container that places components vertically (there is an mx:HBox too if you want horizontal layout).
  • In the initApp method I iterate through the XML, create Movie instances and add them to the movieList ArrayCollection.
The Movie class is:
package dto {
	public class Movie {
		public var name:String;
		public var rating:Number;
		
		public function Movie() {
		}
	}
}


The complete Flex Archive can be downloaded here secondflexapp.zip


The running app with validation ON looks like this...


Adobe Flex - Part 1
July 22, 2009 8:20 AM

Adobe Flex is a framework/sdk provided by Adobe that allows us to build rich internet applications (RIA) using Flash as the underlying technology.

Web designers have been using Flash to design apps for the web for a long time. But the approach of designing with Flash directly is not that conducive to mainstream application development. Thus Flex was created. Flex provides a programming platform which is more easily understood by application developers as compared to Flash. Flex uses two programming languages to achieve this.

MXML - is an XML based programming model to define your UI layout.
ActionScript - is an implementation of ECMAScript (JavaScript 2.0)

While MXML is used to define the UI elements, ActionScript is used to put in the behavior logic. This allows us to separate UI vs controller code. Let us run through an example to get a quick intro. This example is a little more involved than a standard hello but simple nevertheless.

Applications coded in Flash or Flex are executed in the browser using the Flash Player. There is another runtime called the AIR which is used to execute the applications outside of the browser as standalone desktop applications. Which one you use will depend on your application needs. For this tutorial we will use the browser to run the applications.

What You Need to Install
Flex 3 downloads are available from http://www.adobe.com/products/flex/flexdownloads/.

You can download the Flex SDK free of charge. But then you would have to use the command line or some other editor to code the application. Instead I would strongly recommend you download the trial version of the Flex Builder 3 Eclipse Plugin. If you have an existing Eclipse setup then installing the plugin is the preferred way. After 30 days you have to buy this product. I personally feel that Adobe has priced the product too high for regular developers and I hope they provide a free version for development soon.

I also suggest going through the tutorial at http://www.adobe.com/devnet/flex/videotraining/. I found this very useful and strongly recommend it for folks new to Flex.


What is the App?
The application is your personal movie tracker. It shows a grid with a list of movies you have watched and what rating you gave them. You can delete movies from the Grid or add new movies to the grid using a simple form.


Creating a Basic Project
Once the Flex Builder 3 plugin has been installed open up the Flex Debugging perspective and you should see something like this:

Now create a new Flex project and name it 'FirstFlexApp'. This will create a project with the following structure:

  • .actionScriptProperties, .flexProperties contain various Flex related configuration.
  • .project is the Flex eclipse project configuration.
  • By default Flex Builder creates a FirstFlexApp.mxml file. This is what you will run to execute your application.
Even though we have MXML to code the UI layout, you must be aware that MXML is converted to ActionScript at compile time. So it is entirely possible to build your application using ONLY ActionScript, but it would be much harder to do so. Our application MXML code starts with...
<?xml version="1.0"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" creationComplete="initApp()">
...
</mx:Application>

The creationComplete hook is optional. We use it here to call a function which will retrieve the current movie list (as XML) and then populate the grid with the contents. creationComplete is invoked after the layout has been prepared and displayed. There is another hook called initialize which can be used instead. initialize calls a function before the UI is displayed.

Next we add the initial XML data here as:
<mx:XMLList id="moviexml">
<movie>
<name>Angels &amp; Demons</name>
<rating>1</rating>
</movie>

<movie>
<name>Ice Age 3</name>
<rating>2</rating>
</movie>

<movie>
<name>Transformers: Revenge of the Fallen</name>
<rating>3</rating>
</movie>
</mx:XMLList>

mx - is the namespace used to identify the MXML elements. XMLList is an alias for an ActionScript class. Following that are various properties/methods that you want to configure for this class. In this case id gives a unique name to this instance of XMLList. Here we set up 3 movies. There are no remote calls in this tutorial, to keep things easy.

Next here is the rest of the UI layout:
    <mx:Panel title="Movie List" height="100%" width="100%" 
        paddingTop="10" paddingLeft="10" paddingRight="10">

        <mx:Label width="100%" color="blue"
            text="Movies I have seen..."/>

        <mx:DataGrid id="movieGrid" width="50%" height="50%" rowCount="4" resizableColumns="true" editable="false" >
            <mx:columns>
                <mx:DataGridColumn dataField="name" headerText="Name"/>
                <mx:DataGridColumn dataField="rating" headerText="Rating" />
                <mx:DataGridColumn width="40" sortable="false" fontWeight="bold">
                    <mx:itemRenderer>
                        <mx:Component>
                            // refer to full listing for code here
                        </mx:Component>
                    </mx:itemRenderer>
                </mx:DataGridColumn>
            </mx:columns>
        </mx:DataGrid>

        <mx:Panel width="359" height="170" layout="absolute">
            <mx:Label x="10" y="10" text="Movie Name:"/>
            <mx:Label x="42" y="36" text="Rating:"/>
            <mx:TextInput x="96" y="8" id="fldMovieName"/>
            <mx:TextInput x="96" y="34" width="25" id="fldMovieRating" maxChars="1"/>
            <mx:Button x="69" y="76" label="Add Movie" click="addMovie()"/>
        </mx:Panel>

    </mx:Panel>
We define a label and then a data grid to display the XML data. The mx:DataGridColumn is used to define the columns in this table. Our 3rd column is a special non-data column which will display an X symbol to delete a row. The code for that can be seen in the full listing at the end. You can also see the text form to enter new movies. The mx:Button uses its click property to connect the click event to the addMovie function.

Finally here is the ActionScript code to bind the grid to the XML data.
<mx:Script>
<![CDATA[
   import mx.collections.XMLListCollection;
   import mx.controls.Alert;


   [Bindable]
   private var movieList:XMLListCollection; 
            
   public function initApp():void {
      movieList = new XMLListCollection(moviexml); 
      movieGrid.dataProvider = movieList;
   }

   private function addMovie():void {
      if ( fldMovieName.text == "" || fldMovieRating.text == "") {
         Alert.show("Enter valid values for movie and rating.","Alert");
      }
      else {
         movieList.addItem(
            <movie>
               <name>{fldMovieName.text}</name>
               <rating>{fldMovieRating.text}</rating>
            </movie>
         );
      }
   }
]]>
</mx:Script>
The sample above is quite self-explanatory. initApp creates an XMLListCollection and assigns it to movieGrid.dataProvider.

Run the eclipse application and here is what you will see in the browser:

Click on firstflexapp.zip to download the complete Flex Project Archive. You can export a Flex Archive by right clicking on the project and performing an Export to Flex Archive (a zip file).

Facelets
June 16, 2009 7:34 PM

Facelets is a JSF framework to implement UI templating (like Tiles or SiteMesh). You can use Tiles to implement the templating portion, but Facelets is built for JSF.

In addition to the templating feature you can also create reusable components using Facelets. And if you like Tapestry, you can make use of a similar feature with Facelets wherein, rather than using JSF tags in the JSP, you use the jsfc attribute to indicate the component you plan to use. Example:

<input type="text" jsfc="h:outputText" value="Printed using Using jsfc .. like Tapestry" />

Note: I put this example together quite some time back but forgot to publish it earlier. Now straight to an example. I assume a certain knowledge of JSF. If not, you can download the complete working zip file and get an idea for yourself regarding JSF and Facelets.

First of all you create the template.xhtml which will define the layout for our application:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
     xmlns:h="http://java.sun.com/jsf/html"
     xmlns:f="http://java.sun.com/jsf/core"
     xmlns:ui="http://java.sun.com/jsf/facelets">
<head>
  <meta http-equiv="Content-Type" content="text/html;charset=iso-8859-1" />
  <title>Test Facelets webapp</title>
</head>

<body>
   <h3><ui:insert name="title">Default Title</ui:insert></h3>
   <hr />
   <p>
     <ui:insert name="body">Default Body</ui:insert>
   </p>
   <br />
   <br />
   <hr />
</body>
</html>

The above illustrates a very basic example. In the zip file I do not use the above template; instead the zip file has a more elaborate layout.
  • <ui:insert name="title"> : Creates a placeholder to drop page titles.
  • <ui:insert name="body"> : Creates a placeholder to drop page content.
Now here is my content page index.xhtml:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
                     "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
     xmlns:ui="http://java.sun.com/jsf/facelets"
     xmlns:h="http://java.sun.com/jsf/html"
     xmlns:f="http://java.sun.com/jsf/core"
     xmlns:t="http://myfaces.apache.org/tomahawk">
<head>
  <title>Notes</title>
</head>

<body>
  <ui:composition template="/template.xhtml">
      <ui:define name="title">Facelet works</ui:define>
       This text will also not be displayed.

      <ui:define name="body">
          <h:form>
                 <h:commandLink value="Display All Notes" action="toNotes"/>
           </h:form>
     </ui:define>
  </ui:composition>
</body>
</html>

  • <ui:composition template="/template.xhtml"> : The composition tag is used to identify the template to be used for this page.
  • <ui:define name="title"> : The ui:define tag is used here to insert the page title.
  • <ui:define name="body"> : The ui:define tag is used here to insert the page contents.
The web.xml is:
<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                         http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">

     <context-param>
           <param-name>javax.faces.DEFAULT_SUFFIX</param-name>
           <param-value>.xhtml</param-value>
     </context-param>

     <context-param>
           <param-name>facelets.DEVELOPMENT</param-name>
           <param-value>true</param-value>
     </context-param>

     <servlet>
           <servlet-name>Faces Servlet</servlet-name>
           <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
           <load-on-startup>1</load-on-startup>
     </servlet>

     <servlet-mapping>
           <servlet-name>Faces Servlet</servlet-name>
           <url-pattern>*.faces</url-pattern>
     </servlet-mapping>
</web-app>

Finally the faces-config.xml:
<?xml version="1.0" encoding="UTF-8"?>
<faces-config version="1.2" xmlns="http://java.sun.com/xml/ns/javaee"
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                         http://java.sun.com/xml/ns/javaee/web-facesconfig_1_2.xsd">

     <application>
           <view-handler>com.sun.facelets.FaceletViewHandler</view-handler>
     </application>

     <navigation-rule>
           <from-view-id>/index.xhtml</from-view-id>
           <navigation-case>
                 <from-outcome>toNotes</from-outcome>
                 <to-view-id>/notes.xhtml</to-view-id>
           </navigation-case>
     </navigation-rule>
</faces-config>

  • com.sun.facelets.FaceletViewHandler is what does the templating magic.
Here is the screenshot of the home page:

Clicking on Display All Notes will take you to the notes.xhtml page which is another static page with different content.

You can download the complete example by clicking here - faceletss.zip. Having done all of this, I must say that SiteMesh still remains my favourite templating engine. Not sure if it will work with JSF though.

REST with JAX-RS
May 2, 2009 5:52 AM

REST (REpresentational State Transfer) is an architectural style that describes how to use the Web (HTTP) to access services. JAX-RS (JSR 311) is a Java API to support implementation/access of REST web services using Java. The style was first documented by Roy Fielding.

There will be times we use REST principles without even knowing it. Ever since HTTP came around, the largest REST implementation has been the web itself.
  • REST prescribes that all services be treated as resources that can be accessed via a URL. Thus the web is RESTful.
  • REST also requires the services to be stateless.
  • REST does NOT describe any common data exchange formats. The producer and consumer are free to choose whatever format they can agree on.
  • REST services in many cases are cacheable (though I think in many business cases it is not and that's just fine).
  • REST is commonly used over HTTP (though since it is an architectural style one can use the same principles on any transport).

Used over HTTP, the following HTTP methods can be used:
  • GET to retrieve data.
  • DELETE to delete.
  • POST to add new data.
  • PUT to update data.
Now compare all of this to SOAP based web services, where you use a WSDL to publish your interface and a SOAP envelope carries the payload and can optionally provide many services such as transactions, security, addressing, etc. (basically the WS-* nightmare). But often we just need to access a simple service without all of the SOAP complexity. That is where the RESTful architectural style comes in.

On the Java side, JAX-RS was introduced to provide a common API to implement/access REST based services in Java. Jersey is the open source reference implementation of JAX-RS. Let's get to an example and see how this works. I will implement my usual time service: call a service to get the time of the day in XML, plain text or JSON format.

In Eclipse create a dynamic web project. Here is my layout.

Here is the web.xml...
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://java.sun.com/xml/ns/javaee"
   xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
   xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                       http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
    id="WebApp_ID" version="2.5">
   <display-name>jaxrs</display-name>

    <servlet>
       <display-name>jaxrs tryout</display-name>
       <servlet-name>jaxrsservlet</servlet-name>
       <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet-class>
       <load-on-startup>1</load-on-startup>
    </servlet>

    <servlet-mapping>
       <servlet-name>jaxrsservlet</servlet-name>
       <url-pattern>/services/*</url-pattern>
    </servlet-mapping>
</web-app>

Here is the implementation of the TimeOfTheDayService. I use the JAX-RS annotations to configure the various service attributes.
package com.tryout;

import java.text.SimpleDateFormat;
import java.util.Calendar;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

@Path("/timeoftheday")
public class TimeOfTheDayService {

     private static final String PATTERN = "MM.dd.yyyy HH:mm:ss";

     @GET
     @Produces("text/plain")
     @Path("/asplaintext/{name}")
      public String getTimeOfTheDay(@PathParam("name") String name) {
           SimpleDateFormat df = new SimpleDateFormat(PATTERN);
           return name + "-" + df.format(Calendar.getInstance().getTime());
      }

     @GET
     @Produces("application/xml")
     @Path("/asxml/{name}/")
      public Time getTimeOfTheDayInXML(@PathParam("name") String name) {
           SimpleDateFormat df = new SimpleDateFormat(PATTERN);
           Time t = new Time();
           t.setName(name);
           t.setTime(df.format(Calendar.getInstance().getTime()));
           return t;
      }

      @GET
     @Produces("application/json")
     @Path("/asjson/{name}/")
      public Time getTimeOfTheDayInJSON(@PathParam("name") String name) {
           SimpleDateFormat df = new SimpleDateFormat(PATTERN);
           Time t = new Time();
           t.setName(name);
           t.setTime(df.format(Calendar.getInstance().getTime()));
           return t;
      }
}

  • @Path("/timeoftheday") - Specifies the URI part for all the services in this class.
  • @GET - Used to annotate the read method.
  • @Produces("text/plain") - Marks the method as a producer of text/plain content.
  • @Path("/asjson/{name}/") - Describes the path for the specific method. The optional {name} is, in our case, the parameter passed into the service method.
The URL to access the services would be one of:
  • http://localhost:8080/jaxrs/services/timeoftheday/asplaintext/mathew
  • http://localhost:8080/jaxrs/services/timeoftheday/asxml/mathew
  • http://localhost:8080/jaxrs/services/timeoftheday/asjson/mathew
The resource is identified via the URL, and so is the parameter name in this example.
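To illustrate that the parameter really is just a URI segment, here is a plain-JDK sketch (the PathParamDemo class and its lastSegment helper are hypothetical, not part of Jersey) that pulls the trailing segment out of one of the URLs above:

```java
import java.net.URI;

public class PathParamDemo {

    // Returns the trailing path segment, which plays the role of the
    // {name} template parameter in the service URLs above.
    static String lastSegment(String url) {
        String path = URI.create(url).getPath();
        return path.substring(path.lastIndexOf('/') + 1);
    }

    public static void main(String[] args) {
        String url = "http://localhost:8080/jaxrs/services/timeoftheday/asjson/mathew";
        System.out.println(lastSegment(url)); // prints "mathew"
    }
}
```

Jersey does the equivalent matching for you and binds the segment via @PathParam("name").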

The Time javabean class is:
package com.tryout;

import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name = "clock")
public class Time {

      @XmlElement
      private String time;

      @XmlElement
      private String name;

      public void setTime(String time) {
           this.time = time;
      }

      public void setName(String name) {
           this.name = name;
      }
}

Deploy the web application. I used the embedded Tomcat instance in Eclipse to run this example. Access one of the URLs mentioned earlier and you will get the response in the appropriate format.

You can also use the Jersey client API to access this service...
package com.tryout;

import com.sun.jersey.api.client.Client;
import com.sun.jersey.api.client.WebResource;

public class JSONClient {

      public static void main(String[] args) throws Exception {

           Client c = Client.create();

           // Plain text
           WebResource r = c
                       .resource("http://localhost:8080/jaxrs/services/timeoftheday/asplaintext/mathew");
           System.out.println("Plain Text=>> " + r.get(String.class));

           // XML
           r = c.resource("http://localhost:8080/jaxrs/services/timeoftheday/asxml/mathew");
           System.out.println("XML=>> " + r.get(String.class));

           // JSON
           r = c.resource("http://localhost:8080/jaxrs/services/timeoftheday/asjson/mathew");
           r.accept("application/json");
           System.out.println("JSON=>> " + r.get(String.class));
      }
}

Once you execute this client you should get a response such as:

Plain Text=>> mathew-05.02.2009 08:31:35
XML=>> <?xml version="1.0" encoding="UTF-8" standalone="yes"?><clock><time>05.02.2009 08:31:35</time><name>mathew</name></clock>
JSON=>> {"time":"05.02.2009 08:31:35","name":"mathew"}



Pretty simple, eh? Enjoy.

Spring Batch 2.0 - Part III - From Database to Flat File
February 3, 2009 1:30 AM

In Part-II of this series on Spring Batch, I went through an example of reading from a flat file and persisting into the database. In this article I will go through the reverse. Read 200,000 rows from the database and export it into a comma separated flat file.

The export from the database to the flat file took around 10 seconds. That is excellent for Java-based batch processing. Again I must point out that this is relatively fast since I am using a local MySQL database and there is no processing related logic being performed during the entire process.

The file is a comma separated file with format => receiptDate,memberName,checkNumber,checkDate,paymentType,depositAmount,paymentAmount,comments
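To make the format concrete, here is a small stand-alone sketch (LedgerLineDemo is hypothetical, not part of the batch job) that builds and splits one line in that layout:

```java
public class LedgerLineDemo {
    public static void main(String[] args) {
        // Field order matches the export format:
        // receiptDate,memberName,checkNumber,checkDate,paymentType,depositAmount,paymentAmount,comments
        String[] fields = { "01/15/09", "John Doe", "1001", "01/10/09",
                "CHECK", "100.50", "100.50", "January dues" };
        String line = String.join(",", fields);
        System.out.println(line);

        // Splitting recovers the individual columns
        // (naive: assumes no embedded commas in any field).
        String[] parsed = line.split(",");
        System.out.println(parsed[1]); // prints "John Doe"
    }
}
```

Spring Batch's DelimitedLineAggregator and DelimitedLineTokenizer perform this joining and splitting for you.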

DDL for the database table:
 create table ledger (
 ID INT NOT NULL AUTO_INCREMENT,
 rcv_dt date,
 mbr_nm VARCHAR(100) not null,
 chk_nbr VARCHAR(10) not null,
 chk_dt date,
 pymt_typ VARCHAR(50) not null,
 dpst_amt double,
 pymt_amt double,
 comments VARCHAR(100),
 PRIMARY KEY (ID)
 )

Here is the spring application context xml file...

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xmlns:aop="http://www.springframework.org/schema/aop"
      xmlns:tx="http://www.springframework.org/schema/tx"
      xmlns:context="http://www.springframework.org/schema/context"
      xmlns:util="http://www.springframework.org/schema/util"
      xmlns:batch="http://www.springframework.org/schema/batch"
      xsi:schemaLocation="
    http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
    http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.0.xsd
    http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.0.xsd
    http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-2.0.xsd
    http://www.springframework.org/schema/batch http://www.springframework.org/schema/batch/spring-batch-2.0.xsd
    http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-2.5.xsd">

      <!-- 1) USE ANNOTATIONS TO CONFIGURE SPRING BEANS -->
      <context:component-scan base-package="com.batch" />

      <!-- 2) DATASOURCE, TRANSACTION MANAGER AND JDBC TEMPLATE -->
      <bean id="dataSource"
            class="org.springframework.jdbc.datasource.DriverManagerDataSource">
            <property name="driverClassName" value="com.mysql.jdbc.Driver" />
            <property name="url" value="jdbc:mysql://localhost/seamdb" />
            <property name="username" value="root" />
            <property name="password" value="root" />
      </bean>

      <bean id="transactionManager"
            class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
            <property name="dataSource" ref="dataSource" />
      </bean>
      <tx:annotation-driven transaction-manager="transactionManager" />

      <bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
            <property name="dataSource" ref="dataSource" />
      </bean>

      <!-- 3) JOB REPOSITORY - WE USE IN-MEMORY REPOSITORY FOR OUR EXAMPLE -->
      <bean id="jobRepository"
            class="org.springframework.batch.core.repository.support.MapJobRepositoryFactoryBean">
            <property name="transactionManager" ref="transactionManager" />
      </bean>

      <!-- 4) LAUNCH JOBS FROM A REPOSITORY -->
      <bean id="jobLauncher"
            class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
            <property name="jobRepository" ref="jobRepository" />
      </bean>

      <!--
            5) Define the job and its steps. In our case I use one step. Configure
            its readers and writers
      -->
      <batch:job id="simpleJob">
            <batch:step id="step1">
                  <batch:tasklet>
                        <batch:chunk reader="cursorReader" writer="flatFileWriter"
                              commit-interval="1000" />
                  </batch:tasklet>
            </batch:step>
      </batch:job>

      <!-- ======================================================= -->
      <!-- 6) READER -->
      <!-- ======================================================= -->
      <bean id="cursorReader"
            class="org.springframework.batch.item.database.JdbcCursorItemReader">
            <property name="dataSource" ref="dataSource" />
            <property name="sql" value="select * from ledger" />
            <property name="rowMapper" ref="ledgerRowMapper" />
      </bean>

      <!-- ======================================================= -->
      <!-- 7) WRITER -->
      <!-- ======================================================= -->
      <bean id="flatFileWriter" class="org.springframework.batch.item.file.FlatFileItemWriter">
            <property name="resource" value="file:c:/temp/ledgers-output.txt" />
            <property name="lineAggregator">
                  <bean
                        class="org.springframework.batch.item.file.transform.DelimitedLineAggregator">
                        <property name="delimiter" value="," />
                        <property name="fieldExtractor">
                              <bean
                                    class="org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor">
                                    <property name="names" value="id,receiptDate,memberName" />
                              </bean>
                        </property>
                  </bean>
            </property>
      </bean>
</beans>

  • 1 through 4 are the same as the previous blog.
  • 5 - Defines the job and its steps.
  • 6 - Registers a JdbcCursorItemReader, which reads the rows from the database and passes them to the writer. The latest version also has a new reader, JdbcPagingItemReader. That is a better option since it reads a predefined set of rows at a time rather than making a round trip for each row.
  • 7 - Configures the writer to write to a flat file.
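Under the hood, BeanWrapperFieldExtractor reads the named properties through their getters and DelimitedLineAggregator joins them with the delimiter. A rough pure-JDK sketch of that idea (ExtractorSketch is illustrative only, not the Spring Batch implementation):

```java
import java.lang.reflect.Method;
import java.util.StringJoiner;

public class ExtractorSketch {

    // Minimal stand-in for the Ledger bean.
    static class Row {
        private final int id;
        private final String memberName;

        Row(int id, String memberName) {
            this.id = id;
            this.memberName = memberName;
        }

        public int getId() { return id; }
        public String getMemberName() { return memberName; }
    }

    // Invokes the getter for each named property and joins the results,
    // mimicking BeanWrapperFieldExtractor + DelimitedLineAggregator.
    static String aggregate(Object bean, String delimiter, String... names) throws Exception {
        StringJoiner joiner = new StringJoiner(delimiter);
        for (String name : names) {
            String getter = "get" + Character.toUpperCase(name.charAt(0)) + name.substring(1);
            Method m = bean.getClass().getMethod(getter);
            joiner.add(String.valueOf(m.invoke(bean)));
        }
        return joiner.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(aggregate(new Row(1, "John Doe"), ",", "id", "memberName"));
        // prints "1,John Doe"
    }
}
```

The names property on the extractor ("id,receiptDate,memberName") picks which Ledger properties end up in each output line, in that order.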

Here is the Java code:
 

// ====================================================
// Ledger BEAN
// ====================================================
package com.batch.todb;

import java.util.Date;

public class Ledger {
    private int id;
    private Date receiptDate;
    private String memberName;
    private String checkNumber;
    private Date checkDate;
    private String paymentType;
    private double depositAmount;
    private double paymentAmount;
    private String comments;

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public Date getReceiptDate() {
        return receiptDate;
    }

    public void setReceiptDate(Date receiptDate) {
        this.receiptDate = receiptDate;
    }

    public String getMemberName() {
        return memberName;
    }

    public void setMemberName(String memberName) {
        this.memberName = memberName;
    }

    public String getCheckNumber() {
        return checkNumber;
    }

    public void setCheckNumber(String checkNumber) {
        this.checkNumber = checkNumber;
    }

    public Date getCheckDate() {
        return checkDate;
    }

    public void setCheckDate(Date checkDate) {
        this.checkDate = checkDate;
    }

    public String getPaymentType() {
        return paymentType;
    }

    public void setPaymentType(String paymentType) {
        this.paymentType = paymentType;
    }

    public double getDepositAmount() {
        return depositAmount;
    }

    public void setDepositAmount(double depositAmount) {
        this.depositAmount = depositAmount;
    }

    public double getPaymentAmount() {
        return paymentAmount;
    }

    public void setPaymentAmount(double paymentAmount) {
        this.paymentAmount = paymentAmount;
    }

    public String getComments() {
        return comments;
    }

    public void setComments(String comments) {
        this.comments = comments;
    }
}

 

 

// ====================================================
// ROW MAPPER TO CONVERT DATABASE RECORD TO JAVA OBJECT
// ====================================================
package com.batch.fromdb;

import java.sql.ResultSet;
import java.sql.SQLException;

import org.springframework.jdbc.core.RowMapper;
import org.springframework.stereotype.Component;

import com.batch.todb.Ledger;

@Component("ledgerRowMapper")
public class LedgerRowMapper implements RowMapper {
    public Object mapRow(ResultSet rs, int rowNum) throws SQLException {
        Ledger ledger = new Ledger();
        ledger.setId(rs.getInt("id"));
        ledger.setReceiptDate(rs.getDate("rcv_dt"));
        ledger.setMemberName(rs.getString("mbr_nm"));
        ledger.setCheckNumber(rs.getString("chk_nbr"));
        ledger.setCheckDate(rs.getDate("chk_dt"));
        ledger.setPaymentType(rs.getString("pymt_typ"));
        ledger.setDepositAmount(rs.getDouble("dpst_amt"));
        ledger.setPaymentAmount(rs.getDouble("pymt_amt"));
        ledger.setComments(rs.getString("comments"));
        return ledger;
    }

}


// ====================================================
// JUNIT CLASS
// ====================================================
package com.batch.fromdb;

import org.apache.log4j.Logger;
import org.apache.log4j.PropertyConfigurator;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.AbstractDependencyInjectionSpringContextTests;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.util.StopWatch;

@ContextConfiguration(locations = "classpath:com/batch/fromdb/contextFromDB.xml")
@RunWith(SpringJUnit4ClassRunner.class)
public class FromDBBatchTestCase extends
        AbstractDependencyInjectionSpringContextTests {

    private final static Logger logger = Logger
            .getLogger(FromDBBatchTestCase.class);

    @Autowired
    private JobLauncher launcher;

    @Autowired
    private Job job;
    private JobParameters jobParameters = new JobParameters();

    @Before
    public void setup() {
        PropertyConfigurator
                .configure("c:/mathew/springbatch2/src/com/batch/log4j.properties");
    }

    @Test
    public void testLaunchJob() throws Exception {
        StopWatch sw = new StopWatch();
        sw.start();
        launcher.run(job, jobParameters);
        sw.stop();
        logger.info(">>> TIME ELAPSED:" + sw.prettyPrint());
    }

    @Autowired
    public void setLauncher(JobLauncher bootstrap) {
        this.launcher = bootstrap;
    }

    @Autowired
    public void setJob(Job job) {
        this.job = job;
    }
}



After running the test case, you will see a file c:\temp\ledgers-output.txt with 200,000 rows.
INFO FromDBBatchTestCase:44 - >>> TIME ELAPSED:StopWatch '': running time (millis) = 8927


Download Project Files for Part I, II & III:

Here is the Eclipse project containing source code for all three parts (with dependencies): springbatch.jar






Spring Batch 2.0 - Part II - Flat File To Database
February 2, 2009 11:16 PM

Part I of my Spring Batch blog ran through an example of a basic Spring Batch job. Now let's put together one that reads 200,000 rows from a flat file and inserts them into the database. The entire process took around 1 minute and 10 seconds to execute. That is pretty good time for Java-based batch processing. In all fairness I must point out that this is relatively fast since I am using a local MySQL database and there is no processing-related logic being performed during the entire process.

The file is a comma separated file with format => receiptDate,memberName,checkNumber,checkDate,paymentType,depositAmount,paymentAmount,comments

The DDL for the database table:

 create table ledger (
 ID INT NOT NULL AUTO_INCREMENT,
 rcv_dt date,
 mbr_nm VARCHAR(100) not null,
 chk_nbr VARCHAR(10) not null,
 chk_dt date,
 pymt_typ VARCHAR(50) not null,
 dpst_amt double,
 pymt_amt double,
 comments VARCHAR(100),
 PRIMARY KEY (ID)
)



Here is the spring application context xml file...

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xmlns:aop="http://www.springframework.org/schema/aop"
      xmlns:tx="http://www.springframework.org/schema/tx"
      xmlns:batch="http://www.springframework.org/schema/batch"
      xmlns:context="http://www.springframework.org/schema/context"
      xsi:schemaLocation="
    http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
    http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.0.xsd
    http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.0.xsd
    http://www.springframework.org/schema/batch http://www.springframework.org/schema/batch/spring-batch-2.0.xsd
    http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-2.5.xsd">

      <!-- 1) USE ANNOTATIONS TO CONFIGURE SPRING BEANS -->
      <context:component-scan base-package="com.batch" />

      <!-- 2) DATASOURCE, TRANSACTION MANAGER AND JDBC TEMPLATE -->
      <bean id="dataSource"
            class="org.springframework.jdbc.datasource.DriverManagerDataSource">
            <property name="driverClassName" value="com.mysql.jdbc.Driver" />
            <property name="url" value="jdbc:mysql://localhost/seamdb" />
            <property name="username" value="root" />
            <property name="password" value="root" />
      </bean>

      <bean id="transactionManager"
            class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
            <property name="dataSource" ref="dataSource" />
      </bean>
      <tx:annotation-driven transaction-manager="transactionManager" />

      <bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
            <property name="dataSource" ref="dataSource" />
      </bean>

      <!-- 3) JOB REPOSITORY - WE USE IN-MEMORY REPOSITORY FOR OUR EXAMPLE -->
      <bean id="jobRepository"
            class="org.springframework.batch.core.repository.support.MapJobRepositoryFactoryBean">
            <property name="transactionManager" ref="transactionManager" />
      </bean>

      <!-- 4) LAUNCH JOBS FROM A REPOSITORY -->
      <bean id="jobLauncher"
            class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
            <property name="jobRepository" ref="jobRepository" />
      </bean>

      <!--
            5) Define the job and its steps. In our case I use one step. Configure
            its readers and writers
      -->
      <batch:job id="simpleJob">
            <batch:listeners>
                  <batch:listener ref="appJobExecutionListener" />
            </batch:listeners>
            <batch:step id="step1">
                  <batch:tasklet>
                        <batch:listeners>
                              <batch:listener ref="itemFailureLoggerListener" />
                        </batch:listeners>
                        <batch:chunk reader="itemReader" writer="itemWriter"
                              commit-interval="1000" />
                  </batch:tasklet>
            </batch:step>
      </batch:job>

      <!-- ======================================================= -->
      <!-- 6) READER -->
      <!-- ======================================================= -->
      <bean id="itemReader" class="org.springframework.batch.item.file.FlatFileItemReader">
            <property name="resource" value="classpath:com/batch/todb/ledger.txt" />
            <!--property name="linesToSkip" value="1" /-->
            <property name="lineMapper">
                  <bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
                        <property name="lineTokenizer">
                              <bean class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
                                    <property name="names"
                                          value="receiptDate,memberName,checkNumber,checkDate,paymentType,depositAmount,paymentAmount,comments" />
                              </bean>
                        </property>
                        <property name="fieldSetMapper" ref="ledgerMapper" />
                  </bean>
            </property>
      </bean>

      <bean id="inputFile" class="org.springframework.core.io.ClassPathResource">
            <constructor-arg value="com/batch/todb/ledger.txt" />
      </bean>
</beans>



  • 1 through 4 are the same as the previous blog.
  • 5 - Defines the job and its steps. Also registers a job listener and a step listener.
  • 6 - The reader used to read the flat file with comma separated columns. The FlatFileItemReader reads the rows from the flat file and passes them to the writer to persist to the database.
  • 7 - Not shown here is the item writer. It is configured using annotations and the class is LedgerWriter.
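The commit-interval="1000" in the chunk definition means items flow to the writer in chunks, with one transaction commit per chunk. Stripped of the framework, the mechanics look roughly like this (ChunkDemo is illustrative only, not Spring Batch code):

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkDemo {
    public static void main(String[] args) {
        // Stand-in for the 200,000 file rows; 10 items with a chunk size of 4.
        List<Integer> items = new ArrayList<>();
        for (int i = 0; i < 10; i++) items.add(i);

        int commitInterval = 4;
        int commits = 0;
        for (int start = 0; start < items.size(); start += commitInterval) {
            List<Integer> chunk = items.subList(start, Math.min(start + commitInterval, items.size()));
            // In Spring Batch the ItemWriter would persist the whole chunk here,
            // followed by a single transaction commit.
            commits++;
            System.out.println("wrote chunk of " + chunk.size());
        }
        System.out.println("commits = " + commits); // prints "commits = 3"
    }
}
```

Committing every 1000 rows rather than every row is a big part of why the 200,000-row load finishes quickly.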
Now for some of the Java code.


//========================================================
// Ledger BEAN - Bean representing a single ledger
//========================================================
package com.batch.todb;

import java.util.Date;

public class Ledger {
    private int id;
    private Date receiptDate;
    private String memberName;
    private String checkNumber;
    private Date checkDate;
    private String paymentType;
    private double depositAmount;
    private double paymentAmount;
    private String comments;

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public Date getReceiptDate() {
        return receiptDate;
    }

    public void setReceiptDate(Date receiptDate) {
        this.receiptDate = receiptDate;
    }

    public String getMemberName() {
        return memberName;
    }

    public void setMemberName(String memberName) {
        this.memberName = memberName;
    }

    public String getCheckNumber() {
        return checkNumber;
    }

    public void setCheckNumber(String checkNumber) {
        this.checkNumber = checkNumber;
    }

    public Date getCheckDate() {
        return checkDate;
    }

    public void setCheckDate(Date checkDate) {
        this.checkDate = checkDate;
    }

    public String getPaymentType() {
        return paymentType;
    }

    public void setPaymentType(String paymentType) {
        this.paymentType = paymentType;
    }

    public double getDepositAmount() {
        return depositAmount;
    }

    public void setDepositAmount(double depositAmount) {
        this.depositAmount = depositAmount;
    }

    public double getPaymentAmount() {
        return paymentAmount;
    }

    public void setPaymentAmount(double paymentAmount) {
        this.paymentAmount = paymentAmount;
    }

    public String getComments() {
        return comments;
    }

    public void setComments(String comments) {
        this.comments = comments;
    }
}

 

//========================================================
// Ledger DAO - Used to persist ledgers to the ledger table
//========================================================
package com.batch.todb;

import java.sql.PreparedStatement;
import java.sql.SQLException;

import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.PreparedStatementSetter;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Component
public class LedgerDAOImpl extends JdbcTemplate implements LedgerDAO {

    @Autowired
    public void setDataSource(DataSource dataSource) {
        super.setDataSource(dataSource);
    }

    @Transactional(propagation = Propagation.REQUIRED)
    public void save(final Ledger item) {
        update(
                "insert into ledger (rcv_dt, mbr_nm, chk_nbr, chk_dt, pymt_typ, dpst_amt, pymt_amt, comments) values (?,?,?,?,?,?,?,?)",
                new PreparedStatementSetter() {
                    public void setValues(PreparedStatement stmt)
                            throws SQLException {
                        // JDBC parameters are 1-indexed and follow the column order above
                        stmt.setDate(1, new java.sql.Date(item.getReceiptDate().getTime()));
                        stmt.setString(2, item.getMemberName());
                        stmt.setString(3, item.getCheckNumber());
                        stmt.setDate(4, new java.sql.Date(item.getCheckDate().getTime()));
                        stmt.setString(5, item.getPaymentType());
                        stmt.setDouble(6, item.getDepositAmount());
                        stmt.setDouble(7, item.getPaymentAmount());
                        stmt.setString(8, item.getComments());
                    }
                });
    }
}

 

//========================================================
// Ledger WRITER - Performs db operations on a given list of ledger objects
//========================================================
package com.batch.todb;

import java.util.List;

import org.springframework.batch.item.ItemWriter;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component("itemWriter")
public class LedgerWriter implements ItemWriter<Ledger> {

    @Autowired
    private LedgerDAO itemDAO;

    // Called once per chunk with the items read since the last commit
    public void write(List<? extends Ledger> items) throws Exception {
        for (Ledger item : items) {
            itemDAO.save(item);
        }
    }
}

 

//========================================================

// Ledger MAPPER - Maps a set of fields for a single record to the Ledger bean

//========================================================

package com.batch.todb;

import java.text.DecimalFormat;
import java.text.ParseException;

import org.springframework.batch.item.file.mapping.FieldSetMapper;
import org.springframework.batch.item.file.transform.FieldSet;
import org.springframework.stereotype.Component;

@Component("ledgerMapper")
public class LedgerMapper implements FieldSetMapper<Ledger> {
    // MM = month, dd = day of month (lowercase mm would parse minutes)
    private final static String DATE_PATTERN = "MM/dd/yy";
    private final static String DOLLAR_PATTERN = "$###,###.###";

    public Ledger mapFieldSet(FieldSet fs) {
        Ledger item = new Ledger();
        int idx = 0;
        item.setReceiptDate(fs.readDate(idx++, DATE_PATTERN));
        item.setMemberName(fs.readString(idx++));
        item.setCheckNumber(fs.readString(idx++));
        item.setCheckDate(fs.readDate(idx++, DATE_PATTERN));
        item.setPaymentType(fs.readString(idx++));

        // deposit amount
        try {
            DecimalFormat fmttr = new DecimalFormat(DOLLAR_PATTERN);
            Number number = fmttr.parse(fs.readString(idx++));
            item.setDepositAmount(number.doubleValue());
        } catch (ParseException e) {
            item.setDepositAmount(0);
        }

        // payment amount
        try {
            DecimalFormat fmttr = new DecimalFormat(DOLLAR_PATTERN);
            Number number = fmttr.parse(fs.readString(idx++));
            item.setPaymentAmount(number.doubleValue());
        } catch (ParseException e) {
            item.setPaymentAmount(0);
        }

        //
        return item;
    }
}
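The date and currency patterns are the easiest part of this mapper to get wrong, so here is a quick standalone sanity check of both, using only the JDK (the class name is just a throwaway for illustration):

```java
import java.text.DecimalFormat;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class PatternCheck {
    public static void main(String[] args) throws ParseException {
        // MM = month, dd = day of month; lowercase mm would be minutes
        // and DD the day of the year
        SimpleDateFormat dateFmt = new SimpleDateFormat("MM/dd/yy");
        Date d = dateFmt.parse("01/15/09");
        System.out.println(dateFmt.format(d)); // prints 01/15/09

        // DecimalFormat with a "$" prefix parses "$1,234.56" into a Number
        DecimalFormat dollarFmt = new DecimalFormat("$###,###.###");
        Number n = dollarFmt.parse("$1,234.56");
        System.out.println(n.doubleValue()); // prints 1234.56
    }
}
```

Running this round trip before wiring the mapper into the job saves a lot of head scratching over silently mis-parsed dates.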

 



The test driver again is a JUnit class.

package com.batch.todb;

import org.apache.log4j.Logger;
import org.apache.log4j.PropertyConfigurator;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.AbstractTransactionalJUnit4SpringContextTests;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.transaction.TransactionConfiguration;
import org.springframework.util.StopWatch;

@ContextConfiguration(locations = "classpath:com/batch/todb/contextToDB.xml")
@RunWith(SpringJUnit4ClassRunner.class)
@TransactionConfiguration(transactionManager = "transactionManager", defaultRollback = false)
public class ToDBBatchTestCase extends
        AbstractTransactionalJUnit4SpringContextTests {

    private final static Logger logger = Logger
            .getLogger(ToDBBatchTestCase.class);

    @Autowired
    private JobLauncher launcher;

    @Autowired
    private Job job;
    private JobParameters jobParameters = new JobParameters();

    @Before
    public void setup() {
        PropertyConfigurator
                .configure("c:/mathew/springbatch2/src/com/batch/log4j.properties");
    }

    @Test
    public void testLaunchJob() throws Exception {
        StopWatch sw = new StopWatch();
        sw.start();
        launcher.run(job, jobParameters);
        sw.stop();
        logger.info(">>> TIME ELAPSED:" + sw.prettyPrint());

    }

    @Autowired
    public void setLauncher(JobLauncher bootstrap) {
        this.launcher = bootstrap;
    }

    @Autowired
    public void setJob(Job job) {
        this.job = job;
    }
}



Running the test case will insert approx 200k rows into the ledger table. The entire process took roughly 1 minute 12 seconds (71,678 ms), or about 2,800 rows per second.

INFO ToDBBatchTestCase:46 - >>> TIME ELAPSED:StopWatch '': running time (millis) = 71678

Next move over to Spring Batch - Part III - From Database to Flat File

Please see Part III to download entire project file with dependencies

Spring Batch 2.0 - Part I - Simple Tasklet
January 31, 2009 8:06 AM

There is always a healthy debate when talking Java and batches. When I heard of Spring Batch, I had to try it out. On a previous project, many eons back, I did some batch processing in Java. What hurt me there (after a lot of optimizations) was a call to another person's module. His module happily loaded up an entity bean. You can guess where that ended. The next release I went through the code and replaced the entity bean calls with ONE update SQL statement. That fixed things. I was processing 200k records in 15-20 minutes, with an extremely small memory footprint. I could have reduced even this further had I tuned another module, but the performance was deemed enough and we moved on.

What I personally felt from that experience was the need of a decent Java-based Batch processing framework. Of course having this does not mean use Java for batches. Sometimes for bulk processing doing it in the database may be the right approach.

In this blog I want to go over Spring Batch processing. We will start off with some definitions.

Job - A job represents your entire batch work. Each night you need to 1) collect all of the credit card transactions, 2) write them to a file and then 3) send them over to the settlement provider. Here I defined three logical steps. In Spring Batch a job is made up of Steps, each Step being a unit of work.

Step - A job is made up of one or more steps.

JobInstance - A running instance of the job that you have defined. Think of the Job as a class and the JobInstance as, well, your object. Our credit card processing job runs 7 days a week at 11pm. Each execution is a JobInstance.

JobParameters - Parameters that go into a JobInstance.

JobExecution - Every attempt to run a JobInstance results in a JobExecution. Say the Jan 1st, 2008 CC Settlement job failed for some reason. It is re-run and now it succeeds. So we have one JobInstance but two executions (thus two JobExecutions). There is also the concept of a StepExecution, which represents an attempt to run a Step in a Job.

JobRepository - This is the persistent store for all of our job metadata. In this example I set up the repository to use an in-memory store. You can back it with a database if you want.

JobLauncher - As the name suggests, this object lets you launch a job.

Tasklet - Used for steps that do not have input and output processing (readers and writers). We use a tasklet in this blog.

The next three definitions do not apply to this blog since I will not be using them. Part II of this blog will show an example of these.

ItemReader - Abstraction used to represent an object that allows you to read in one object of interest that you want to process. In my credit card example it could be one card transaction retrieved from the database.

ItemWriter - Abstraction used to write out the final results of a batch. In the credit card example it could be a provider specific representation of the transaction which needs to be in a file. Maybe in XML or comma separated flat file.

ItemProcessor - Very important. Here you can initiate business logic on a just read item. Perform computations on the object and maybe calculate more fields before passing on to the writer to write out to the output file.
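To make the reader/processor/writer split concrete, here is a plain-Java sketch of the read-process-write loop (this is not actual Spring Batch code; the uppercasing stands in for whatever business logic the processor applies):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class ChunkLoopSketch {
    public static void main(String[] args) {
        // the "reader": hands out one item at a time until exhausted
        Iterator<String> reader = Arrays.asList("txn1", "txn2", "txn3").iterator();
        List<String> output = new ArrayList<String>();
        while (reader.hasNext()) {
            String item = reader.next();
            // the "processor": applies business logic to the item just read
            String processed = item.toUpperCase();
            // the "writer": accumulates results for the output destination
            output.add(processed);
        }
        System.out.println(output); // [TXN1, TXN2, TXN3]
    }
}
```

Spring Batch runs the same loop for you, adding chunking, transactions and restartability on top, which is exactly the plumbing you do not want to hand-roll.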

In this blog let's go through the age-old hello world example. Our job will run a task which prints out hello world. Not much is happening here, but it will show all of the important concepts at work before Part II, where I use the reader and writer to read from a flat file and insert 200k records into the database (in about a minute). Wanted to throw that out to the naysayers who just hate doing batches in Java.
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:aop="http://www.springframework.org/schema/aop"
xmlns:tx="http://www.springframework.org/schema/tx" xmlns:context="http://www.springframework.org/schema/context"
xmlns:util="http://www.springframework.org/schema/util" xmlns:batch="http://www.springframework.org/schema/batch"

xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.0.xsd
http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.0.xsd
http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-2.0.xsd
http://www.springframework.org/schema/batch http://www.springframework.org/schema/batch/spring-batch-2.0.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-2.5.xsd">

<!-- 1) USE ANNOTATIONS TO CONFIGURE SPRING BEANS -->
<context:component-scan base-package="com.batch" />

<!-- 2) DATASOURCE, TRANSACTION MANAGER AND JDBC TEMPLATE -->
<bean id="dataSource"
class="org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name="driverClassName" value="com.mysql.jdbc.Driver" />
<property name="url" value="jdbc:mysql://localhost/seamdb" />
<property name="username" value="root" />
<property name="password" value="" />
</bean>

<bean id="transactionManager"
class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource" />
</bean>
<tx:annotation-driven transaction-manager="transactionManager" />

<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
<property name="dataSource" ref="dataSource" />
</bean>


<!-- 3) JOB REPOSITORY - WE USE IN-MEMORY REPOSITORY FOR OUR EXAMPLE -->
<bean id="jobRepository"
class="org.springframework.batch.core.repository.support.MapJobRepositoryFactoryBean">
<property name="transactionManager" ref="transactionManager" />
</bean>

<!-- 4) LAUNCH JOBS FROM A REPOSITORY -->
<bean id="jobLauncher"
class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
<property name="jobRepository" ref="jobRepository" />
</bean>


<!--
5) Define the job and its steps. In our case I use one step. Configure
its readers and writers
-->
<batch:job id="simpleJob">
<batch:step id="step1">
<batch:tasklet>
<batch:chunk reader="cursorReader" writer="flatFileWriter"
commit-interval="1000" />
</batch:tasklet>
</batch:step>
</batch:job>

<!-- ======================================================= -->
<!-- 6) READER -->
<!-- ======================================================= -->
<bean id="cursorReader"
class="org.springframework.batch.item.database.JdbcCursorItemReader">
<property name="dataSource" ref="dataSource" />
<property name="sql" value="select * from ledger" />
<property name="rowMapper" ref="ledgerRowMapper" />
</bean>


<!-- ======================================================= -->
<!-- 7) WRITER -->
<!-- ======================================================= -->
<bean id="flatFileWriter" class="org.springframework.batch.item.file.FlatFileItemWriter">
<property name="resource" value="file:/Users/mathew/temp/ledgers-output.txt" />
<property name="lineAggregator">
<bean
class="org.springframework.batch.item.file.transform.DelimitedLineAggregator">
<property name="delimiter" value="," />
<property name="fieldExtractor">
<bean
class="org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor">
<property name="names" value="id,receiptDate,memberName" />
</bean>
</property>
</bean>
</property>
</bean>
</beans>
  1. Use annotations to identify and autowire my Spring beans.
  2. Ignore the data source configuration; it is not used for this example. It is here because of a DAO I use in Parts II & III.
  3. Configure the job repository. We use an in-memory store for this example.
  4. The job launcher.
  5. Register the two beans that make up the 2 steps for the job. One prints hello world and the other the time of day.
  6. Last but not least is the Job definition itself. Note the batch:listener element, which registers a listener to track job execution.
Now here is the code for the 2 steps:
package com.batch.simpletask;

import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;

public class HelloTask implements Tasklet {

private String taskStartMessage;

public void setTaskStartMessage(String taskStartMessage) {
this.taskStartMessage = taskStartMessage;
}

public RepeatStatus execute(StepContribution arg0, ChunkContext arg1)
throws Exception {
System.out.println(taskStartMessage);
return RepeatStatus.FINISHED;
}
}

And here is the second tasklet, which prints out the time of day:
package com.batch.simpletask;

import java.util.Date;

import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;

public class TimeTask implements Tasklet {

public RepeatStatus execute(StepContribution arg0, ChunkContext arg1)
throws Exception {
System.out.println(new Date());
return RepeatStatus.FINISHED;
}
}

Last but not the least is my test driver that launches the batch itself. I use a Spring enabled JUnit test case to implement my driver.
package com.batch.simpletask;

import org.apache.log4j.Logger;
import org.apache.log4j.PropertyConfigurator;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.AbstractJUnit4SpringContextTests;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.util.StopWatch;

@ContextConfiguration(locations = "classpath:com/batch/simpletask/simpletaskletcontext.xml")
@RunWith(SpringJUnit4ClassRunner.class)
public class SimpleTaskletTestCase extends
AbstractJUnit4SpringContextTests {

private final static Logger logger = Logger
.getLogger(SimpleTaskletTestCase.class);

@Autowired
private JobLauncher launcher;

@Autowired
private Job job;
private JobParameters jobParameters = new JobParameters();

@Before
public void setup() {
PropertyConfigurator
.configure("c:/mathew/springbatch2/src/com/batch/log4j.properties");
}

@Test
public void testLaunchJob() throws Exception {
StopWatch sw = new StopWatch();
sw.start();
launcher.run(job, jobParameters);
sw.stop();
logger.info(">>> TIME ELAPSED:" + sw.prettyPrint());

}

@Autowired
public void setLauncher(JobLauncher bootstrap) {
this.launcher = bootstrap;
}

@Autowired
public void setJob(Job job) {
this.job = job;
}
}

The result of running this is (I removed the log4j log statements for clarity):
Hello World - the time is now
Thu Sep 24 21:36:03 EDT 2009
Spring Batch - Part II - Flat File To Database - Read from a comma separated file and insert 200k rows into a MYSQL database.
Spring Batch - Part III - From Database to Flat File - Read back the 200K rows and now write it out to a new file. Later.

Please see Part III to download entire project file with dependencies

Ubuntu + Apache2 + php + Zend
October 4, 2008 10:17 PM

At times I get so tired of Java that I just yearn for a different set of frameworks (or should I say an environment with a good, concise language and fewer frameworks to choose from). I have been interested in the following stacks:
  • Ruby & Ruby on Rails
  • PHP & Zend Framework
  • Python & Django
I tinkered enough with Ruby on Rails to know that it is a good framework. Next I gave Zend, a PHP web development framework, a shot. Below is the story of that exercise. My home machine has Vista and I am quite sick of it. To free myself from all of this pain I installed Ubuntu.
In this blog I will go through my environment setup. In a later one I will go into Zend itself. By the way, if you run into (as I did) resolution issues with Ubuntu, don't lose hope. After breaking my head on that, I was able to locate an article that worked for me: http://ubuntuforums.org/showpost.php?p=129379&postcount=21. Before this my screen resolution was stuck at 800x600. Pretty useless. Now it's a lot higher and life is good. Thanks to the poster above.

Next I installed Sun's Java 1.6 (oh, but why?). I am planning on using NetBeans for other Java related work and maybe also PHP editing (have not tried that yet). The Ubuntu server image that I have did not have Java in it. At least that's what the following command showed:
sudo update-alternatives --config java
No alternatives for java


I could not find a ready package to install JDK 1.6 from Sun. After some tinkering around I executed the following (thanks to a great blog at http://fci-h.blogspot.com/2007/02/installing-jdk6-on-ubuntu.html). For your convenience I am repeating the steps (including an extra one to configure javac).
  • Download the linux version of JDK 1.6 from http://java.sun.com/javase/downloads/ea/6u10/6u10rcDownload.jsp#6u10JDKs.
  • Next chmod the downloaded bin file so that we can execute it (lazy me did a chmod 777 jdk-6u10-rc2-bin-b32-linux-i586-12_sep_2008.bin)
  • Next execute that file ./jdk-6u10-rc2-bin-b32-linux-i586-12_sep_2008.bin
  • This should create an exploded directory jdk1.6.0_10
  • sudo mv jdk1.6.0_10 /usr/lib/jvm (make sure there is a folder named /usr/lib/jvm/jdk1.6.0_10)
  • To map java run command
    • sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.6.0_10/jre/bin/java 60 --slave /usr/share/man/man1/java.1.gz java.1.gz /usr/lib/jvm/jdk1.6.0_10/man/m
  • To map javac run command
    • sudo update-alternatives --install /usr/bin/javac javac /usr/lib/jvm/jdk1.6.0_10/bin/javac 60 --slave /usr/share/man/man1/javac.1.gz javac.1.gz /usr/lib/jvm/jdk1.6.0_10/man/m
  • That's it.
To verify run
sudo update-alternatives --config java
mathew@mathew-desktop:~$ sudo update-alternatives --config java
[sudo] password for mathew:

There is only 1 program which provides java
(/usr/lib/jvm/jdk1.6.0_10/jre/bin/java). Nothing to configure.
Of course type in java and javac on the command line as a final test. We digressed.  A lot of the commands so far and further down use sudo to execute them as root.
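A one-file compile-and-run check exercises both javac and java in one go (VersionCheck is just a throwaway name for this test):

```java
public class VersionCheck {
    public static void main(String[] args) {
        // prints the version of the JVM that update-alternatives resolved to
        System.out.println(System.getProperty("java.version"));
        System.out.println(System.getProperty("java.home"));
    }
}
```

Compile with "javac VersionCheck.java" and run with "java VersionCheck"; if the printed version and home match the JDK you just installed, the alternatives mapping worked.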
  • Next I installed PHP5. Ubuntu has described that in good detail at https://help.ubuntu.com/8.04/serverguide/C/php5.html. Not repeating it here. Most important - run the hello world php to make sure all is good.
  • Some useful apache2 file/folder locations.
    • To edit the site root folder => sudo vi /etc/apache2/sites-available/default
    • I changed my DocumentRoot to  /home/mathew/projects/zendqs/public (this is in preparation for my first Zend example...the quickstart example from the zend site).
    • To restart apache2 => sudo /etc/init.d/apache2 restart
  • Download the latest version of Zend (1.6 as of this blog http://framework.zend.com/).
  • Now we need to tell PHP about Zend. Edit the PHP configuration file /etc/php5/apache2/php.ini
    • Add the line: include_path = "/usr/share/ZendFramework-1.6.1"
    • As you can see I moved my ZendFramework download folder to the location above.
  • Remember this site if you run into any mod_rewrite issues: http://www.huanix.com/2007/04/18/mod_rewrite-for-apache2-in-ubuntu-feisty-fawn-704/. This article will tell you how to enable mod_rewrite if it is not already done. You will come across mod_rewrite if you follow the Zend Quickstart example as I plan to.
One thing obvious here is that it's a lot of work to set up. But doing so gives you a better idea of what is going on and how things are wired. There is a wealth of information on the web; I am just putting it together here for the next person who googles around.

My next blog will be on using the Zend Framework itself.

Google Web Toolkit (GWT)
September 11, 2008 7:11 PM

I went through an example to try out GWT. The GWT website has some very good documentation. I went through the stock watcher example and built it step by step myself, making a few changes on the way, such as putting the whole thing in a TabPanel.

I believe Java UI frameworks that produce web code (HTML) are a great alternative to DIY HTML/JavaScript libraries. Every time I look at AJAX web frameworks and libraries, I think many of us are missing the point: focus on building the application rather than tinkering with so many frameworks. I think frameworks like GWT will provide a strong alternative in the years to come. Toolset (IDE) support will make or break these frameworks.

After implementing the example, I was amazed at the ease with which I could implement AJAX functionality. In this case the UI periodically polls the backend for stock price changes and updates the section of the page. The code was...

            // setup timer to refresh list automatically

            Timer refreshTimer = new Timer() {

                  public void run() {

                        refreshWatchList();

                  }

            };

            refreshTimer.scheduleRepeating(REFRESH_INTERVAL);


The refreshWatchList method is shown below...

      private void refreshWatchList() {

            // lazy initialization of service proxy

            if (stockPriceSvc == null) {

                  stockPriceSvc = GWT.create(StockPriceService.class);

            }

 

            AsyncCallback<StockPrice[]> callback = new AsyncCallback<StockPrice[]>() {

                  public void onFailure(Throwable caught) {

                        // do something with errors

                  }

 

                  public void onSuccess(StockPrice[] result) {

                        updateTable(result);

                  }

            };

 

            // make the call to the stock price service

            stockPriceSvc.getPrices(stocks.toArray(new String[0]), callback);

      }


It's so obvious what the above code does. Well, almost obvious. There is some GWT-specific weird stuff you need to do, but that's a small price to pay. Here is an image of the output screen...




The price and change columns periodically update themselves (without refreshing the whole page of course). Here is my complete Eclipse project if you want to try it out. Download StockWatcher

Spring Security - The new and improved ACEGI
August 25, 2008 10:06 PM

Ah! Good ol' ACEGI. Or should I say the good old new ACEGI, aka Spring Security. In one of my previous posts I blogged about configuring good ol' ACEGI. Since I last looked at it, ACEGI has become Spring Security. The configuration nightmare has been reduced greatly, though you are handicapped if you have used it earlier... like me. I keep trying to link back to the old configuration.

For those attempting Spring Security for the first time, a piece of advice: ignore all comparisons to ACEGI and move on. You will find your brain in better shape at the end of the exercise.

First of all, to make my life easier, I created a Dynamic Web Project in Eclipse 3.3.2. Next I have Tomcat configured to run within my IDE for quick testing. My goal is to simply protect a bunch of web pages using Spring Security. It's easy to extend this to larger web apps.

First lets see the web.xml below.

 

<?xml version="1.0" encoding="UTF-8"?>

<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"

      xmlns="http://java.sun.com/xml/ns/javaee"

      xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"

      xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"

      id="WebApp_ID" version="2.5">

      <display-name>springsecurity</display-name>

      <context-param>

            <param-name>log4jConfigLocation</param-name>

            <param-value>/WEB-INF/log4j.properties</param-value>

      </context-param>

      <context-param>

            <param-name>contextConfigLocation</param-name>

            <param-value>

                  /WEB-INF/application-security.xml

                  /WEB-INF/application-service.xml

            </param-value>

      </context-param>

      <listener>

            <listener-class>

                  org.springframework.web.context.ContextLoaderListener

            </listener-class>

      </listener>

      <listener>

            <listener-class>

                  org.springframework.web.util.Log4jConfigListener

            </listener-class>

      </listener>

      <filter>

            <filter-name>springSecurityFilterChain</filter-name>

            <filter-class>

                  org.springframework.web.filter.DelegatingFilterProxy

            </filter-class>

      </filter>

      <filter-mapping>

            <filter-name>springSecurityFilterChain</filter-name>

            <url-pattern>/*</url-pattern>

      </filter-mapping>

      <welcome-file-list>

            <welcome-file>index.jsp</welcome-file>

      </welcome-file-list>

</web-app>


The only thing of interest is the filter org.springframework.web.filter.DelegatingFilterProxy. I would love to compare this to the ACEGI configuration but I will resist the urge. Just forget any old stuff. Suffice it to say that all URLs matching the configured pattern pass through this filter and Spring Security is "performed" on them.

Next let us see the login.jsp page:

<%@ include file="includes.jsp"%>

<%@ page import="org.springframework.security.ui.AbstractProcessingFilter" %>

<%@ page import="org.springframework.security.ui.webapp.AuthenticationProcessingFilter" %>

<%@ page import="org.springframework.security.AuthenticationException" %>

 

<html>

<head>

<title>Login</title>

</head>

<body>

<%

if (session.getAttribute(AbstractProcessingFilter.SPRING_SECURITY_LAST_EXCEPTION_KEY) != null) { %>

<font color="red"> Your login attempt was not successful, please try again.<BR>

<br/>

Reason: <%=((AuthenticationException)

  session.getAttribute(AbstractProcessingFilter.SPRING_SECURITY_LAST_EXCEPTION_KEY)).getMessage()%>

</font>

<%

}

%>

 

<form method="post" id="loginForm"

      action="<c:url value='j_spring_security_check'/>">Username: <input

      type="text" name="j_username" id="j_username" /> <br />

Password: <input type="password" name="j_password" id="j_password" /><br />

<input type="submit" value="Login" /></form>

</body>

</html>

 

Note the login form's action and the way the input fields are named. For Spring Security to pick up the credentials you must use these exact names (j_spring_security_check, j_username, j_password).

Most important ... here is the spring context file application-security.xml
 

<?xml version="1.0" encoding="UTF-8"?>

<beans xmlns="http://www.springframework.org/schema/beans"

      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"

      xmlns:security="http://www.springframework.org/schema/security"

      xsi:schemaLocation="

                  http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd

                  http://www.springframework.org/schema/security http://www.springframework.org/schema/security/spring-security-2.0.xsd">

 

      <security:authentication-manager alias="authenticationManager" />

 

      <security:http auto-config="true"

            access-denied-page="/accessdenied.jsp">

            <security:form-login login-page="/login.jsp"

                  authentication-failure-url="/login.jsp"

                  default-target-url="/index.jsp" />

            <security:logout logout-success-url="/login.jsp" />

            <security:intercept-url pattern="/index.jsp"

                  access="ROLE_ADMIN,ROLE_USER" />

            <security:intercept-url pattern="/admin/**" access="ROLE_ADMIN" />

            <security:intercept-url pattern="/**" access="ROLE_ANONYMOUS" />

      </security:http>

 

      <bean id="loggerListener"

            class="org.springframework.security.event.authentication.LoggerListener" />

 

      <security:authentication-provider>

            <security:password-encoder hash="md5"/>

            <security:user-service>

                  <security:user password="5f4dcc3b5aa765d61d8327deb882cf99" name="thomasm"

                        authorities="ROLE_USER,ROLE_ANONYMOUS" />

                  <security:user password="5f4dcc3b5aa765d61d8327deb882cf99" name="admin"

                        authorities="ROLE_ADMIN,ROLE_USER,ROLE_ANONYMOUS" />

            </security:user-service>

      </security:authentication-provider>

</beans>

 


  • Spring namespace is used to configure security.
  • security:authentication-manager need not be listed; if absent, a default will be created. List it if you need to refer to it from some other configuration. In my case I did just to make the point.
  • security:http is self-explanatory. We configure the form login pages and the role-to-URL patterns to protect. You can choose to not have this mapping here and instead implement your own object definition source class and provide the info from there.
  • Finally, security:authentication-provider is used to configure a password encoder and, in this case, an in-memory user store. The password is 'password'; I have listed the MD5 values above to show the use of the encoder.
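The hashes in the user entries are easy to verify yourself; this few-line JDK snippet performs the same MD5-to-hex computation the password encoder does (class name is just for illustration):

```java
import java.math.BigInteger;
import java.security.MessageDigest;

public class Md5Check {
    public static void main(String[] args) throws Exception {
        // compute the MD5 digest of the plaintext password
        MessageDigest md = MessageDigest.getInstance("MD5");
        byte[] digest = md.digest("password".getBytes("UTF-8"));
        // render it as a zero-padded 32-character lowercase hex string
        String hex = String.format("%032x", new BigInteger(1, digest));
        System.out.println(hex); // 5f4dcc3b5aa765d61d8327deb882cf99
    }
}
```

Swap in your own plaintext to generate hashes for new users in the configuration.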
Once logged in the user is sent to the index.jsp which shows some common content and some admin specific content (if user has admin role).

<%@ include file="includes.jsp"%>

<html>

<head>

<title>Home</title>

</head>

<body>

You are logged in. To log out click

<a href='<c:url value="j_spring_security_logout"/>'>log out</a>

<br />

<a href="admin/admin.jsp">admin</a>

<br />

<authz:authorize ifAllGranted="ROLE_ADMIN">

      <p style="font-weight: bold">This text is only visible to admin

      users.</p>

</authz:authorize>

 

</body>

</html>

 
  • Note the use of the taglib authz to show/hide admin related content.
The following screen shots show you how this all works...






The bold text above is only displayed to users with admin role.

Now for a few tips. Obviously you will not be hardcoding the user names and passwords into the configuration. Right, my friend! For this you can implement your own class that gets the credentials from wherever you choose. This class then hooks into Spring Security. Here is a sample class, CustomUserService.java, where I have mocked the credentials in code.
 

package com.test;

 

import org.springframework.dao.DataAccessException;

import org.springframework.security.GrantedAuthority;

import org.springframework.security.GrantedAuthorityImpl;

import org.springframework.security.userdetails.User;

import org.springframework.security.userdetails.UserDetails;

import org.springframework.security.userdetails.UserDetailsService;

import org.springframework.security.userdetails.UsernameNotFoundException;

 

public class CustomUserService implements UserDetailsService {

      public UserDetails loadUserByUsername(String user)

                  throws UsernameNotFoundException, DataAccessException {

            User ud = null;

            if ("admin".equals(user)) {

                  GrantedAuthority[] auths = new GrantedAuthority[] {

                              new GrantedAuthorityImpl("ROLE_ADMIN"),

                              new GrantedAuthorityImpl("ROLE_USER"),

                              new GrantedAuthorityImpl("ROLE_ANONYMOUS") };

                  ud = new User(user, "5f4dcc3b5aa765d61d8327deb882cf99", true, true,

                              true, true, auths);

            } else if ("thomasm1".equals(user)) {

                  GrantedAuthority[] auths = new GrantedAuthority[] {

                              new GrantedAuthorityImpl("ROLE_USER"),

                              new GrantedAuthorityImpl("ROLE_ANONYMOUS") };

                  ud = new User(user, "5f4dcc3b5aa765d61d8327deb882cf99", true, true,

                              true, true, auths);

            }

            return ud;

      }

}



In the application-security.xml file you need to make the following change.
 

<security:authentication-provider user-service-ref="customUserService">

            <security:password-encoder hash="md5" />

</security:authentication-provider>
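Since the password-encoder is set to md5, the hash stored in CustomUserService ("5f4dcc3b5aa765d61d8327deb882cf99") is simply the MD5 hex digest of the clear-text password (in this case, the word "password"). If you are wondering how to generate such a hash yourself, here is a quick sketch using only JDK classes (the class name Md5Hex is mine, not part of the sample project):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Md5Hex {

    // Returns the lowercase hex MD5 digest of the input string.
    public static String md5Hex(String input) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("MD5");
        byte[] digest = md.digest(input.getBytes());
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b & 0xff));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // "password" hashes to the value stored in CustomUserService above.
        System.out.println(md5Hex("password"));
    }
}
```

Whatever you store in your user database should be the output of the same hash function the password-encoder is configured with.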


The rest of the code should work the same. Finally, one more useful feature that I have used since the good ol' ACEGI days: security protection for the service layer. Add the following to your configuration...
    <global-method-security secured-annotations="enabled" jsr250-annotations="enabled"/>
Next add the annotation @Secured( {"ROLE_SECRET_AGENT"} ) to your service methods.

J2ME/JavaME Visited Again
July 17, 2008 11:20 PM

It seems like an eternity since I last tried J2ME, or JavaME as it's known now. It was Sept 2003, when I was working at a product development company and building a component using J2ME. I even managed to get an article published at http://my.advisor.com/articles.nsf/aid/12697.

When I tinkered with Android a few weeks back, I got the urge right then to revisit JavaME. I wanted to try a JavaME example again and see if anything has changed. I still believe that JavaME will eventually die out in favor of a more full-featured platform (whether it is the Java SDK or something else, I dunno). For a primer on JavaME stacks you can check my article on Advisor above. It's surprising how little has changed.

In this example I will build a JavaME application using Netbeans. The application will present the user with a screen to enter an ISBN number for a book. It will then make a remote web service call to validate whether the ISBN number is valid. When I tried a similar web service example in 2003, web services were not yet in the optional stack. Now they are. Previously I had to use ksoap; now I do not need to. I can use the built-in libraries (if the device supports them... and that is the big headache with either JavaME or Android... device capabilities).

Create a new NetBeans MIDP project named ISBNValidator, using the CLDC-1.1 and MIDP-2.1 configuration. The IDE will create an ISBNValidatorMidlet and throw you into a page flow designer; I switched to the source code view. Change the code to:


package com.test;

import isbnservice.ISBNService_Stub;
import java.rmi.RemoteException;

import javax.microedition.lcdui.Command;
import javax.microedition.lcdui.CommandListener;
import javax.microedition.lcdui.Display;
import javax.microedition.lcdui.Displayable;
import javax.microedition.midlet.MIDlet;
import javax.microedition.midlet.MIDletStateChangeException;

public class ISBNValidator extends MIDlet implements CommandListener {

    private EnterISBNNumberForm isbnForm;

    public ISBNValidator() {
        isbnForm = new EnterISBNNumberForm("ISBN Validator", this);
    }

    protected void destroyApp(boolean arg0) {
    }

    protected void pauseApp() {
    }

    protected void startApp() throws MIDletStateChangeException {
        Display.getDisplay(this).setCurrent(isbnForm);
    }

    public void commandAction(Command cmd, Displayable disp) {
        if (cmd.getCommandType() == Command.EXIT) {
            destroyApp(false);
            notifyDestroyed();
        } else if (cmd.getLabel().equalsIgnoreCase("Check ISBN")) {
            final MIDlet parent = this;
            new Thread() {

                public void run() {
                    String result = validateISBN(isbnForm.getIsbnNumber());
                    String msg = "ISBN Valid => " + isbnForm.getIsbnNumber() + ", is ";
                    ISBNValidatorResultForm resultForm = new ISBNValidatorResultForm(msg, result, (CommandListener) parent);
                    Display.getDisplay(parent).setCurrent(resultForm);
                }
            }.start();
        } else if (cmd.getLabel().equalsIgnoreCase("Main")) {
            Display.getDisplay(this).setCurrent(isbnForm);
        }
    }

    private String validateISBN(String isbn) {
        ISBNService_Stub stub = new ISBNService_Stub();
        String result = "bad isbn";
        if (isbn == null || (isbn.trim().length() != 10 && isbn.trim().length() != 13)) {
            return result;
        }
        try {
            if (isbn.trim().length() == 10 && stub.IsValidISBN10(isbn)) {
                result = "good isbn";
            } else if (isbn.trim().length() == 13 && stub.IsValidISBN13(isbn)) {
                result = "good isbn";
            }
        } catch (RemoteException e) {
            e.printStackTrace();
        }
        return result;
    }
}

In the method validateISBN you can see I do the web service call. Now you must be guessing how I got the stubs created. Netbeans has made that easy for us. Right click on the project and select "New JavaME Web Service Client". Provide the WSDL URL webservices.daehosting.com/services/isbnservice.wso?WSDL and you are done.
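The remote service does the actual validation, but the underlying ISBN math is simple enough to compute locally too. Purely for reference (this class is my own sketch, not part of the MIDlet project above), here are the standard ISBN-10 and ISBN-13 check-digit algorithms in plain Java:

```java
public class IsbnCheck {

    // ISBN-10: weighted sum (weights 10 down to 1) must be divisible by 11.
    // A trailing 'X' counts as the digit 10.
    public static boolean isValidIsbn10(String isbn) {
        if (isbn == null || isbn.length() != 10) return false;
        int sum = 0;
        for (int i = 0; i < 10; i++) {
            char c = isbn.charAt(i);
            int digit;
            if (c == 'X' && i == 9) digit = 10;
            else if (Character.isDigit(c)) digit = c - '0';
            else return false;
            sum += (10 - i) * digit;
        }
        return sum % 11 == 0;
    }

    // ISBN-13: alternating weights 1 and 3; sum must be divisible by 10.
    public static boolean isValidIsbn13(String isbn) {
        if (isbn == null || isbn.length() != 13) return false;
        int sum = 0;
        for (int i = 0; i < 13; i++) {
            char c = isbn.charAt(i);
            if (!Character.isDigit(c)) return false;
            sum += (c - '0') * (i % 2 == 0 ? 1 : 3);
        }
        return sum % 10 == 0;
    }

    public static void main(String[] args) {
        System.out.println(isValidIsbn10("0306406152"));    // a known valid ISBN-10
        System.out.println(isValidIsbn13("9780306406157")); // its ISBN-13 form
    }
}
```

This is also a handy fallback if you want basic client-side validation before burning network time on the web service call.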

For the sake of completeness, here are the other two classes I coded. I put the forms in two independent classes.

Class EnterISBNNumberForm
package com.test;

import javax.microedition.lcdui.Command;
import javax.microedition.lcdui.CommandListener;
import javax.microedition.lcdui.Form;
import javax.microedition.lcdui.TextField;

public class EnterISBNNumberForm extends Form {

    private Command okCommand;
    private Command exitCommand;
    private TextField isbnNumber;

    public EnterISBNNumberForm(String title, CommandListener cmdlistener) {
        super(title);

        exitCommand = new Command("Exit", Command.EXIT, 1);
        okCommand = new Command("Check ISBN", Command.OK, 1);

        isbnNumber = new TextField("ISBN#: ", "", 13, TextField.ANY);
        append(isbnNumber);
        addCommand(okCommand);
        addCommand(exitCommand);
        this.setCommandListener(cmdlistener);
    }

    public String getIsbnNumber() {
        return isbnNumber.getString();
    }
}


Class ISBNValidatorResultForm
package com.test;
import javax.microedition.lcdui.Command;
import javax.microedition.lcdui.CommandListener;
import javax.microedition.lcdui.Form;
import javax.microedition.lcdui.StringItem;

public class ISBNValidatorResultForm extends Form {
    private Command okCommand;
    private StringItem box;
    private String value;

    public String getValue() {
        return value;
    }

    public void setValue(String value) {
        this.value = value;
    }

    public ISBNValidatorResultForm(String title, String value,
            CommandListener cmdlistener) {
        super(title);
        this.value = value;
        okCommand = new Command("Main", Command.OK, 1);
        box = new StringItem("ISBN Valid => ", this.value);
        append(box);
        addCommand(okCommand);
        this.setCommandListener(cmdlistener);
    }

}

If you run the project an emulator should pop up and you can launch the application. The following two images show the application in action:




Android for Mobile Apps
June 17, 2008 7:05 PM

Android is a complete open source stack for building mobile applications. The stack includes an operating system, middleware and common applications. It also provides a Java API to develop our own custom mobile applications. It does not discriminate between the common applications and custom applications: everything the common applications can do, yours can too (making calls, sending SMS, etc.).

What gets me excited is the (Linux kernel based) OS. There is no open source OS on the mobile platform. This is a major plus when it comes to mobile environments.

Before going into an Android sample, the obvious question is...what about Java ME (previously called J2ME)? I do not see any reason why a Java ME runtime cannot be created for the Android OS. With that we could continue to use Java ME. It would also open up use of JavaFX on this platform. As of today I could not locate any Java ME implementations. If the reader knows of one please do let me know.

The one concern I had was regarding the optional APIs in Android. It is a known fact that not all mobile devices are created equal (in hardware and other related device capabilities). Some devices will not support a certain feature, and therefore those APIs will not work on them. This is exactly why Java ME created configurations and profiles. So the Android optional APIs are going to end up in the same situation as Java ME: whatever they decide to call it eventually, there have to be some basic API sets, much like Java ME's configurations and profiles. So I do not see any big need to jump ship from Java ME to Android (other than the new car smell). I do believe that eventually there will be a Java ME implementation on Android.

Cutting to the chase, let's build a quick hello world application and then move on to something bigger. An Android application consists of one or more of the following building blocks:
  • Activity - An activity is nothing but a user screen.
  • IntentReceiver - These are non-UI components you build so that you can listen for external events such as the phone ringing.
  • Service - Non-UI component that will always run in the background.
  • Content Provider - Allows the application to store data locally.

First get yourself a copy of Android from http://code.google.com/android/intro/installing.html. Follow the instructions there or just do the following:

  • Unzip the Android zip to some folder.
  • Add <android-folder>\tools to your OS path environment.
  • Install the Eclipse plugin via URL https://dl-ssl.google.com/android/eclipse/ . Restart Eclipse if prompted.
  • Open Eclipse Window->Preferences. Select Android on the left and set the SDK location to yours (example C:\mathew\android-sdk_m5-rc15_windows).

Now create a new Android project in Eclipse. It will pop up the project wizard.


Click finish and you should now have your project ready. The wizard has created the following basic application class named HelloApplication.java

package com.hello;

import android.app.Activity;
import android.os.Bundle;

public class HelloApplication extends Activity {
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle icicle) {
        super.onCreate(icicle);
        setContentView(R.layout.main);
    }
}


Let's modify this to:

package com.hello;

import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;

public class HelloApplication extends Activity {
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle icicle) {
        super.onCreate(icicle);

        TextView tv = new TextView(this);
        tv.setText("The ever so happy Hello World program.");
        setContentView(tv);

    }
}

Right click on the project ... select Run As -> Android Application. You should now see the Android emulator load up...



If all worked fine, you should now have a working development environment. Now let's dig a little deeper. In our HelloApplication class we built the UI using APIs directly in the Java code. Android provides an alternative approach: define your UI screens in an XML meta language. Android comes with a slew of layout managers to lay out your UI components. Specific UI components are called Views, and views can be grouped into ViewGroups. View groups can contain either views or other groups.

Let's modify our program to use XML for defining our screens (like HTML).


package com.hello;

import android.app.Activity;
import android.os.Bundle;

public class HelloApplication extends Activity {
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle icicle) {
        super.onCreate(icicle);
        setContentView(R.layout.main);
    }
}

We have removed the java code to create UI widgets. R.layout.main now refers to res\layout\main.xml. XML inside it is:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical" android:layout_width="fill_parent"
    android:layout_height="fill_parent">
    <TextView android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:text="Hello World, Now loaded from XML." />
</LinearLayout>


Running the application should now get you to the following output...


Spring LDAP Template
June 13, 2008 12:27 AM

Just like the JDBC/Hibernate/iBatis templates in Spring, there is also an LdapTemplate. You can download the Spring LDAP library from http://springframework.org/ldap. I like this template approach simply because it lets us avoid common pitfalls such as not cleaning up resources after using an API (in JDBC it's the connection, statement and result set). Why bother when the template can do this for you? The same holds true for LDAP queries.

For this example I had the following setup:
  • Apache Directory Server 1.5.2. I decided to use the sample directory data.
  • Installed the Apache Directory Studio eclipse plugin.
To confirm your setup, open Eclipse and go to the LDAP perspective. Create a new connection with the following information:
  • hostname - localhost
  • port - 10389
  • Bind DN or user - uid=admin,ou=system
  • password - secret (this is the default password for apache ds)
This should let you into the directory. Under dc=example,dc=com I added two organizations (asia and americas).


Now for the Spring stuff. 

package trial;

import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.ldap.core.LdapTemplate;
import org.springframework.stereotype.Service;

@Service
public class LDAPSampleImpl implements LDAPSample {

      @Autowired
      private LdapTemplate ldapTemplate;

      @Override
      public List getOrgNames() {
            return ldapTemplate.list("");
      }
}


The spring XML file looks like:

<?xml version="1.0" encoding="UTF-8"?>

<beans xmlns="http://www.springframework.org/schema/beans"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xmlns:aop="http://www.springframework.org/schema/aop"
      xmlns:context="http://www.springframework.org/schema/context"
      xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
           http://www.springframework.org/schema/aop
           http://www.springframework.org/schema/aop/spring-aop-2.5.xsd
           http://www.springframework.org/schema/context
             http://www.springframework.org/schema/context/spring-context-2.5.xsd">

      <context:annotation-config />
      <context:component-scan base-package="trial" />

      <bean id="ldapContextSource"
            class="org.springframework.ldap.core.support.LdapContextSource">
            <property name="url" value="ldap://localhost:10389" />
            <property name="base" value="dc=example,dc=com" />
            <property name="userDn" value="uid=admin,ou=system" />
            <property name="password" value="secret" />
      </bean>

      <bean id="ldapTemplate" class="org.springframework.ldap.core.LdapTemplate">
            <constructor-arg ref="ldapContextSource" />
      </bean>
</beans>


Everything above is self-explanatory. Now for the test case to execute all of this.

package trial;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "classpath:spring-context.xml" })
public class DriverTestCase {

      @Autowired
      private LDAPSample ldap;
 

      @Test
      public void testGreeting() {
            System.out.println(ldap.getOrgNames());
      }
}


Running this unit test results in output
>> [ou=asia, ou=americas]

Maven Assemblies
May 13, 2008 11:07 PM

Due to requests I received for source code for my open source file parser project (http://www.javaforge.com/project/2066), I had to quickly figure out a way to package a zip file that would contain

- source code
- binary jar file
- javadocs
- license and release-changes text files.

Since I am using Maven, I found that assemblies could come to my aid. Thought I'd jot down a few things here so that others can google their way to the same.

I created a file src/main/assembly/src.xml. It contains the list of artifacts I wanted to package as a zip. The XML should be self-explanatory.
 
<assembly>
    <id>dist</id>
    <formats>
        <format>zip</format>
    </formats>
    <baseDirectory>flatfilereader</baseDirectory>
    <includeBaseDirectory>true</includeBaseDirectory>
    <fileSets>
        <fileSet>
            <directory>target</directory>
            <outputDirectory></outputDirectory>
            <includes>
                <include>*.jar</include>
            </includes>
        </fileSet>
        <fileSet>
            <directory>docs</directory>
            <outputDirectory></outputDirectory>
            <includes>
                <include>*.doc</include>
            </includes>
        </fileSet>
        <fileSet>
            <directory>target/site</directory>
            <outputDirectory></outputDirectory>
            <includes>
                <include>**/*</include>
            </includes>
        </fileSet>
        <fileSet>
            <directory>src/main/resources</directory>
            <outputDirectory></outputDirectory>
            <includes>
                <include>license.txt</include>
                <include>release-changes.txt</include>
            </includes>
        </fileSet>
    </fileSets>
</assembly>


In my pom.xml I have the following under the build section.
<build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>1.5</source>
                    <target>1.5</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-source-plugin</artifactId>
                <configuration>
                    <outputDirectory>target</outputDirectory>
                    <finalName></finalName>
                    <attach>false</attach>
                </configuration>
                <executions>
                    <execution>
                        <id>make-source-jar</id>
                        <phase>package</phase>
                        <goals>
                            <goal>jar</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-javadoc-plugin</artifactId>
                <configuration>
                    <outputDirectory>target</outputDirectory>
                </configuration>
                <executions>
                    <execution>
                        <id>make-javadoc</id>
                        <phase>package</phase>
                        <goals>
                            <goal>javadoc</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <configuration>
                    <descriptors>
                        <descriptor>
                            src/main/assembly/src.xml
                        </descriptor>
                    </descriptors>
                </configuration>
            </plugin>
        </plugins>
    </build>


Once this is done, running command "mvn assembly:assembly" will trigger the build and the creation of my distributable file as target\flatfilereader-0.6-dist.zip




Spring 2.5 Annotations
March 13, 2008 7:03 PM

With release 2.5 of Spring we have much more complete support for annotation-driven configuration. All of that XML configuration stuff can be set aside now. Release 2.0 had previously introduced some other annotations, @Transactional being my favorite.

Let's go through a quick sample. All of the code below I developed as a simple Java project in Eclipse (3.3).
package trial;

public interface Greeter {
    public String getGreeting();
}

Nothing special, just a simple interface.
package trial;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class GreeterImpl implements Greeter {
    @Autowired
    private Clock clock;

    public String getGreeting() {
        return "Good day. The time is now " + clock.getTime();
    }
}

  • @Service marks this as a component managed by Spring.
  • @Autowired marks the field as a dependency which Spring will inject. You could provide a setter method for Clock and put the @Autowired annotation on the method instead, but I am in favor of annotating the field itself. Why waste code lines just to do injection?
Clock has a similar implementation. For the sake of completeness, here it is...
package trial;
public interface Clock {
    public String getTime();
}

package trial;
import java.util.Calendar;

import org.springframework.stereotype.Service;

@Service
public class ClockImpl implements Clock {

    @Override
    public String getTime() {
        return Calendar.getInstance().getTime().toString();
    }
}

Here is the Spring configuration.xml file...
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:aop="http://www.springframework.org/schema/aop"
    xmlns:context="http://www.springframework.org/schema/context"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
           http://www.springframework.org/schema/aop
           http://www.springframework.org/schema/aop/spring-aop-2.5.xsd
           http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context-2.5.xsd">

    <context:annotation-config />
    <context:component-scan base-package="trial" />
</beans>

  • The tag 'context:annotation-config' tells spring to go the annotation way.
  • The tag 'context:component-scan' tells spring to look for annotated classes in the specified package. Once you put this tag you do not need the 'context:annotation-config' tag.
  • Sometimes when we autowire by type there may be multiple beans of the same type (or an inherited type). One case is if you define multiple datasources. How do you tell your bean which one to pick in that case? Use @Qualifier("beanid") to select the specific one.
Now to test this, let's write a quick JUnit test using support from the Spring TestContext Framework. To quote directly from the Spring documentation:
"The Spring TestContext Framework (located in the org.springframework.test.context package) provides generic, annotation-driven unit and integration testing support that is agnostic of the testing framework in use, for example JUnit 3.8, JUnit 4.4, TestNG 5.5, etc."

The code is shown below.
package trial;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "classpath:context.xml" })
public class GreeterTestCase {

    @Autowired
    private Greeter greeter;

    @Test
    public void testGreeting() {
        System.out.println(greeter.getGreeting());
    }
}

@RunWith - tells Spring to use its test wrapper to run the JUnit 4 test case.

Once you execute this test you will see something like:
>> Good day. The time is now Thu Mar 13 21:20:30 EDT 2008

To get you in a more lively mood...no, it's not alcohol...there are also @Aspect and @Pointcut annotations which let you configure AOP-related items.

More Annotations:
  • You can use @Repository instead of @Service to mark DAO implementations.
  • @Component is a generic annotation to mark any object as a spring managed object. @Repository and @Service are special stereotypes that mark business service and DAO classes.
  • @Required is used to mark a property as required to be set. At runtime an IllegalArgumentException will be thrown if this dependency is not wired.
  • @Transactional is used to mark methods/classes as transactional. If applied on the class it applies to all methods otherwise to specific methods. For this to work you must set up a transaction manager and also enable annotation driven transaction demarcation using
    <tx:annotation-driven transaction-manager="txManager"/>
  • You can also use the JSR-250 annotation @Resource(name="abc") to perform dependency injection. Name will be the id of the bean.
  • Another pair of JSR-250 annotations that are supported are the lifecycle method annotations @PostConstruct and @PreDestroy. These will mark methods that need to be notified when those lifecycle events occur.

Ruby
March 11, 2008 7:53 PM

Most IT managers know about the precious stone ruby and not the programming language Ruby. It's a sad state of affairs that career IT managers are so disconnected from what is happening in the technology world. Not much you and I can do about that...so let me do some Ruby"ing" here.

The point I want to emphasize in this blog is that Ruby reduces a lot of noise in languages such as Java.
#just print out hello world
puts "Hello World"

# create a method to print hello world
def echo(name)
  puts "Hello #{name}"
end

# invoke it
echo("Mathew")

# define a user info class
class User
  attr_accessor :userName
  attr_accessor :fullName
  attr_accessor :otherNames
 
  def initialize(userName = "none", fullName = "",otherNames="")
    @userName = userName
    @fullName = fullName
    @otherNames = otherNames
    greetings = ["hello", "namaste", "gutan tag"]
    for i in 0...greetings.length
      puts ">>" + greetings[i]
    end
  end
 
  def printOtherNames
    if @otherNames.nil?
      puts "[no othernames]"
    elsif @otherNames.respond_to?("each")
      @otherNames.each do |i|
        puts "#{i}"
      end
    end
  end
 
  def toString
    puts "UserName->" + @userName +", FullName->" + @fullName
  end
end

# create an instance and call echo
e = User.new("mthomas", "mathew thomas", ["guru", "ggg"])
e.toString()

# print the attr value
puts e.userName

puts e.printOtherNames


Executing the above using the Ruby interpreter gets us:
Hello World
Hello Mathew
>>hello
>>namaste
>>gutan tag
UserName->mthomas, FullName->mathew thomas
mthomas


guru
ggg

To print 'Hello World' to the console
   puts "Hello World"

Create Methods
To define a new method
   
def echo(name)
       puts "Hello #{name}"
    end

Creates a new function, which we then call passing it the name value. If you want to provide default values for function parameters then
    def echo(name="none")

Creating Classes
We create a class User to hold some basic information. The method initialize is the constructor. You refer to instance variables with an @ prefix, as in @fullName. The keyword attr_accessor allows us to access the value of the member variable from outside, like e.userName. Without this keyword we would not be able to access the member variable.

Arrays/Lists
   greetings = ["hello", "namaste", "gutan tag"]
Declares a variable named 'greetings' holding a list of strings. You do not need to specify types for your variables; Ruby figures that out based on the value you assign. You can even change the type later by assigning a different value to the variable.

Closures
Ruby supports closures - blocks of code that you can define and pass around. Look at the code sample below:
     @otherNames.each do |i|
       puts "#{i}"
     end

The instance variable @otherNames is a list of strings. A note for the Java programmer: everything in Ruby is an object, so your variable is an object that inherits various methods from base classes in Ruby. All this happens auto-magically for you. If the variable is an array then it has a method named 'each' which can be used to iterate over every item in the array. In the example above we go over each element and apply a block of code to that element. The |i| is a temporary variable that holds the current element.
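To make the noise comparison concrete, here is roughly the same User class written in plain Java (my own sketch; the names mirror the Ruby version above). Notice how much of it is boilerplate that Ruby's initialize defaults and attr_accessor give you for free:

```java
import java.util.Arrays;
import java.util.List;

public class User {
    private String userName;
    private String fullName;
    private List<String> otherNames;

    // In Ruby this whole constructor collapses into one initialize line.
    public User(String userName, String fullName, List<String> otherNames) {
        this.userName = userName;
        this.fullName = fullName;
        this.otherNames = otherNames;
    }

    // attr_accessor equivalents, written out by hand.
    public String getUserName() { return userName; }
    public String getFullName() { return fullName; }

    public void printOtherNames() {
        if (otherNames == null) {
            System.out.println("[no othernames]");
        } else {
            for (String name : otherNames) {
                System.out.println(name);
            }
        }
    }

    @Override
    public String toString() {
        return "UserName->" + userName + ", FullName->" + fullName;
    }

    public static void main(String[] args) {
        User e = new User("mthomas", "mathew thomas", Arrays.asList("guru", "ggg"));
        System.out.println(e);
        System.out.println(e.getUserName());
        e.printOtherNames();
    }
}
```

Same behavior, several times the line count - which is exactly the noise the Ruby version avoids.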
 
In Conclusion
What attracts me to Ruby is that it removes a lot of the noise out of my code. I am not giving up on Java anytime soon, but its definitely worth every techie's time to do some Ruby or any other dynamic language (try jython...though I hate the indentation approach). There is also JRuby which is a pure java implementation of the Ruby language.

Having more languages ported to run on the VM is a good thing....and of course Sun has to do that since Microsoft is doing this for their languages on top of CLI.  For the Java platform we already have Jython, JRuby and then there is Groovy.

But at the end of the day, if I were starting a brand new web project, why not just develop using Ruby and Ruby on Rails? Sure, we could use Groovy and Grails, but for many applications tight integration with current J2EE libraries is not needed. In future we could use Jython 2.5 and Django too. I think the bigger challenge is finding enough good people to write in these new languages. Once that is achieved...we need another set to maintain the code. Ruby code, being so compact, can sometimes be difficult to read. But then massive code bases in Java can be just as hard.

Subversion Basics
February 18, 2008 10:40 PM

This blog is a quick step-by-step guide to creating a subversion repository.

First install subversion binaries (http://subversion.tigris.org/).

Create the Repository
Next open up a command prompt and type in
> svnadmin create \svnrepository

This creates a new repository under folder \svnrepository. If you browse to this folder you will see folders/files that subversion uses internally to manage itself.

Next create the project directory.
> svn mkdir -m "Creating initial project root folder" file:///svnrepository/myapp1

You can later on choose to create another project such as
> svn mkdir -m "Creating initial project root folder" file:///svnrepository/myapp2

So now you have two projects myapp1 and myapp2. Subversion tracks all of your folders and files, so how you decide to organize your folder structure is up to you. Recommended structure is:

project\
    \trunk
    \branches
    \tags

  • trunk will contain your main line development
  • branches will contain just that. You can also use this folder to create temporary branches and then remove them later (remove does not delete the folder...it just removes it from the HEAD version).
  • tags will hold code that has been labeled over time

Upload the initial code base
So my sample project directory is:
myapp1\
    \trunk
        \dev
           \web
           \service
           \test
           build.xml
        \docs
    \branches
    \tags


Let's import this code into our repository for the first time.
> svn import myapp1 file:///svnrepository/myapp1 -m"initial load of myapp1 code base"

You will see some log messages from subversion indicating that the code is being checked in. What you have now is a repository loaded with the first version of the project. Your original source structure is not "connected" to subversion; the import command leaves the original source as-is.

To get "connected" with subversion you will issue a checkout command such as:
> svn checkout file:///svnrepository/myapp1
Once again you will see a bunch of log messages indicating the files are being checked out.

Running Subversion as a Windows Service
In the commands above I am using the local repository URL file:///svnrepository/myapp1. What if others need to connect to the same repository? One option is to share your folder. The better option is to run subversion as a service. You have various options to do this, but the simplest way is to register the svnserve executable as a windows service.

> sc create svnserve binpath= "\"c:\Program Files\Subversion\bin\svnserve.exe\" --service --root c:\\svnrepository" displayname= "Subversion" depend= tcpip start= auto

Enter the above command on a single line. Also if there are spaces in your path then escape with quotes.

Next install the Eclipse Subclipse plugin. Open the SVN Repository perspective. Create a reference to a new repository location. In our case use the URL file:///svnrepository/myapp1 or svn://localhost. Use svn: only if you have the svnserve executable running.

If you want to restrict who can access the repository then edit the file /svnrepository/conf/svnserve.conf. This file only takes effect if you use the svnserve executable, as is clearly mentioned in the comments inside the file. Create the users and their passwords in the /svnrepository/conf/passwd file.
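For reference, here is what a minimal svnserve.conf and passwd pair might look like (the user names and passwords below are illustrative):

```ini
# /svnrepository/conf/svnserve.conf
[general]
anon-access = none
auth-access = write
password-db = passwd

# /svnrepository/conf/passwd
[users]
admin = adminsecret
testuser = testpassword
```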


Working with subversion
  • Edit the code file. The edits are local and not visible to other users.
  • Check in the file. Now other users can "see" the changes when they sync to the HEAD version.
  • To get changes from others perform an 'update' on the folder. This will synch your working copy with the HEAD version.
  • Remember when you check a file into subversion it versions the entire code base and not just that one file. This way subversion is able to take you to the state of the code for a given checkin. This is slightly confusing for folks who are used to other repository tools that version file by file.
  • Of course in your local copy you can have files from various versions and not just the head.

Creating Tags (or labels)
A tag is a point-in-time snapshot of the state of the repository. Say you finished an important milestone (end of an iteration or sprint). You can create a tag to mark the repository state at that point-in-time.
svn copy svn://hostname/svnrepository/myapp1/trunk \
svn://hostname/svnrepository/myapp1/tags/R1-S1 \
-m "Release1-Sprint1 tag"
Creating Branches
You are done with Release-1 and now want to continue on release 2, but you know there is a 1.1 release planned with a few minor features. Release-2 will continue on the main line (trunk). For 1.1 we create a branch from the 1.0 release code (you probably tagged it).
svn copy svn://hostname/svnrepository/myapp1/trunk \
svn://hostname/svnrepository/myapp1/branches/release1.1 \
-m "Creating branch for 1.1."
You will notice that the command to create a tag is the same as the one for creating a branch. Subversion makes no distinction between the two. Just remember to use tags to create point-in-time snapshots and branches to track and work on multiple streams of work.

You can also create branches for work that is risky. Let's say there's a major refactoring effort and you do not want to mess with the mainline (trunk) yet. Create a branch and do the refactoring there. Once everything is good to go on the new branch, merge it into the main line.
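That last merge step can be sketched like this (the branch name and revision number are made up for illustration; use svn log to find the revision where your branch was created):

```
# from an up-to-date working copy of trunk
> svn merge -r 120:HEAD svn://hostname/svnrepository/myapp1/branches/refactoring .
# build and test the merged working copy, then commit the result
> svn commit -m "Merged refactoring branch back into trunk"
```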



Enterprise Service Bus
February 15, 2008 11:55 PM

Enterprise Service Bus (ESB) is an integration platform that facilitates the integration of various systems/applications using a common communication backbone and a set of available platform services.

The resources could be a Java/J2EE application, a .NET application, Web Service, Message Queue, FTP server, databases such as Oracle, etc.

In large organizations it is often the case that applications have been built over time to serve specific business needs and use different technologies (J2EE, .NET, etc). You are faced with the challenge of "talking" to these different resources and providing a new set of services (tied to some business need of course). You could roll up your sleeves and write code to do all of this. Or you could take the help of an ESB product that provides you an umbrella under which you can implement your new service(s).

Let me come up with a mock business scenario:
  • Talk to a J2EE application and get the status of inventory for a certain product. This is via an EJB interface (stateless session bean).
  • Talk to a .NET application to find out current vendors, who we have contracted with, to purchase products.
  • Send an email approval request to a requisition manager and wait for his approval before proceeding with the purchase.
  • Once approved, send the order to vendor-A via ftp. Vendor-A loves FTP and comma delimited text file formats. So we need to transform the message to their needs.
  • Monitor receipt of a file from vendor-A which says "yes we received the order...we expect to ship it on blah date".
  • Send an XML document with transaction details (via Web Services) to the payment processing system to approve payment for this order.
  • We need to make sure everything is audited.
  • One last thing....only authorized users can perform these steps. Integrate with the corporate LDAP server for authentication needs.
Well in the real world there is probably more to it, but you begin to get the idea. Now let's map out what we are up against from a technical point of view:
  • The entire process is a workflow with clearly defined tasks happening along the way. So we need a way to describe the business process and orchestrate the entire flow.
  • The workflow has a manual task in there (manager approval). So this is an asynchronous workflow that can take many days to execute.
  • Distributed applications and technologies (J2EE, .NET).
  • Different communication mechanisms (web services, EJB/RMI, FTP, email).
  • Data transformation is required at various steps.
  • Security. Only authorized individuals can use this system. How do we interoperate the different security mechanisms used?

It is entirely possible for us to build this from scratch. But it is a significant effort. Why not use an ESB framework that facilitates this? An ESB framework provides many of these infrastructure services so that we may concentrate on implementing the business process.


 An ESB framework supports:
  • An SOA based architecture to build services.
  • A reliable messaging backbone (or bus) via which the various resources can communicate. Earlier I listed some types of resources. They were all external to the ESB. You could also have resources internal to the ESB, like specific business components residing inside.
  • A message routing mechanism so that messages flow from one resource to the other. No more direct resource to resource links (via code).
  • Content based routing. 
  • Data transformation. Change one XML document to another XML doc with different schema or from XML to a comma delimited text file.
  • Provides adapters which deal with talking to resources, thereby saving you the trouble of writing resource specific code.
  • Adapters also perform marshalling and unmarshalling of data between the resource and the message bus.
  • A business process modeling language and runtime to manage business workflows (BPEL).
  • Transactional services.
  • Security.
  • Supports open standards such as Web Services (SOAP, WSDL, UDDI).
  • Supports various communication mechanisms such as FTP, Secure FTP, CORBA, RMI, REST, HTTP, SSL, SMTP, etc.
  • Some ESB's will also provide support for SOA Governance. This allows enforcing and tracking service usage.
When do you need an ESB? If your application topology (and requirements) starts looking like the diagram above (minus the integration bus of course) then you need one.
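Content based routing, mentioned above, is easy to picture with a small sketch. The class, method and destination names below are my own for illustration and not any particular ESB's API; the idea is simply that a router inspects the content of a message and the bus delivers it to whatever destination the router picks:

```java
import java.util.Map;

// Illustrative only: a content-based router looks inside the message
// and decides where it should go; the bus handles the actual delivery.
// In a real ESB this decision is usually expressed in configuration
// (XPath expressions, rules), not hand-written code like this.
public class ContentRouterSketch {
    public static String route(Map<String, String> message) {
        // e.g. purchase orders above a threshold need manager approval first
        if ("purchaseOrder".equals(message.get("type"))
                && Integer.parseInt(message.get("amount")) > 10000) {
            return "queue://approvals";
        }
        // ordinary purchase orders go straight out to vendor-A via FTP
        if ("purchaseOrder".equals(message.get("type"))) {
            return "ftp://vendorA/orders";
        }
        // anything we do not recognize lands in a dead-letter queue
        return "queue://deadLetter";
    }
}
```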

jQuery
January 24, 2008 9:08 PM

Some weeks back I came across jQuery. Once in a while a library comes along that makes you say "wow". To me jQuery is among those. jQuery is a JavaScript library that allows you to perform complicated (or sometimes not complicated but boring) JavaScript work in just a few lines.

What's the fuss about jQuery! Let's say you want to do the following:
  1. When the page is loaded, draw some tabbed panes.
  2. When the page is loaded, make an ajax call to get an RSS xml file and then parse through the XML response and display all blog titles.
  3. When a link is clicked you want some list items (in li tags) to turn blue via css manipulation.
  4. And you want to display the number of paragraphs within a div.
  5. Last you want the div to hide (slowly).
  6. Oh also when you click on the tabs you want to print out the current time.
Now imagine all of this on a single html page (small page). That's a fair bit of JavaScript there. With jQuery I put all of that on a single html page so you can see its power.

Here is the complete html code...save it to file test.html. Note that I include the latest js code directly from the jQuery site. If you download the library then change the script include paths appropriately.
<html>
    <head>
       <script type="text/javascript" src="http://jqueryjs.googlecode.com/files/jquery-1.2.2.min.js"></script>
       <link rel="stylesheet" href="http://dev.jquery.com/view/trunk/themes/flora/flora.all.css" type="text/css" media="screen" title="Flora (Default)">
       <script type="text/javascript" src="http://dev.jquery.com/view/trunk/ui/current/ui.tabs.js"></script>
       <script type="text/javascript">
           
           $(document).ready(function(){
               // draw the tabs
               $("#example > ul").tabs();
               
               // make the ajax call to load the atom xml
               $.ajax({
                   url: 'atom.xml',
                   type: 'GET',
                   dataType: 'xml',
                   timeout: 1000,
                   error: function(){
                       alert('Error loading XML document');
                   },
                   success: function(xml){
                       $(xml).find('entry').find('title').each(function(){
                           var item_text = $(this).text();
                           $('<li></li>')
                           .html(item_text)
                           .appendTo('ol').appendTo("#titles");
                       });
                   }
               });
               
                // attach an event handler to ALL links on the page
               $("a").click(function(){
                   // change the style of the links to bold
                   $("a").addClass("test");
                   
                   // effects example...hide the clickme link
                   $("#clickme").hide("slow");
                   
                   // change the li items to blue color
                   $("#orderedlist > li").addClass("blue");
                   
                   // print the number of paras and the current time
                   $("#numparas").text("Number of paras " + $("#mycontainer p").size() +". Time is "+ new Date());
                   
                   return false;
               });
           });
       </script>
       
       <style type="text/css">
           a.test { font-weight: bold; }
           .blue {color:blue}
       </style>
       
    </head>
    <body>
        <a id="clickme" href="http://blogs.averconsulting.com/">Click here to hide me and make the li items blue!!</a>
       
       <ul id="orderedlist">
           <li>test1</li>
           <li>test2</li>
       </ul>
       <div id="titles">
            <b><u>Some of my blogged topics...</u></b><br/>
       </div>
       <div id="mycontainer">
           <p>para1</p>
           <p>para2</p>
           <p>para3</p>
       </div>
       
       <div id="numparas">
       </div>
       <div id="example" class="flora">
           <ul>
               
                <li><a href="#fragment-1"><span>Tab-1</span></a></li>
                <li><a href="#fragment-2"><span>Tab-2</span></a></li>
                <li><a href="#fragment-3"><span>Tab-3</span></a></li>
           </ul>
           <div id="fragment-1">
               Tab-1 content here...
           </div>
           <div id="fragment-2">
               Tab-2 content here...
           </div>
           <div id="fragment-3">
               Tab-3 content here...
           </div>
       </div>
    </body>
</html>

Download the atom.xml file and copy it to the same folder as test.html. Fire up test.html in your browser.

A few pointers:
  • $("#orderedlist > li") means return the li elements inside the element with id 'orderedlist'.
  • $.ajax is the call to the ajax function. If you are familiar with JSON you should be fine reading that code.
  • $(xml).find('entry').find('title').each(function(){....} parses the AJAX XML response. It finds all entry/title elements and for each executes the anonymous function.
  • $("a").click(function(){....} attaches an anonymous event handler function to ALL link tags.
  • $("a").addClass adds a style class to all link tags.
  • $("#mycontainer p").size() means count the number of <p> tags inside the div element with id 'mycontainer'.
  • $("#numparas").text means replace the text inside.
There is the jQuery library and then there is the jQuery UI library. The tabs example above uses the jQuery UI library. There are other UI widgets built around jQuery and I like what I see, though I wish the table supported inline editing of cells. There is a plugin that allows editing, but I would prefer the main table code to support editing. For now the plugin will have to do.

AJAX app performance tuning
January 11, 2008 8:29 PM

Let's say you have a web application that uses some YUI widgets and also some other commercial AJAX widgets. These frameworks contain many .js and .css files that need to come down to the browser. To add to this mix you have your own javascript files. Caching these files in the browser's cache will help performance and give the user a better experience. Also, since some of these files are quite large, we can consider gzipping them on the server to reduce the payload size.

To get to this analysis I used Yslow Firefox plugin (http://developer.yahoo.com/yslow/). It analyzes the current page and gives it a grade. The application I tested did not get grade 'A' but I cannot control the remaining items (such as CDN). I would strongly recommend folks to use Yslow to at least see where things stand. Also check out http://developer.yahoo.com/performance/rules.html.

First things first: I need to cache .js and .css files in the browser cache. I was using Weblogic 8.1 and could not find a configurable way to set Expires headers for .js files that are included like:

<script type="text/javascript" src="<%=context%>/js/yui/<%=yuiVrsn%>/yahoo-dom-event/yahoo-dom-event.js"></script> 
<script type="text/javascript" src="<%=context%>/js/myapp.js?<%=bldVrsn%>"></script>

I put in a version as part of the URL since I do not want to be at the mercy of the user's browser settings. If I change the YUI version on the server then it will force the browser to pull down the new file and NOT use any old cached version with the same file name. I follow the same approach for 3rd party files as well as my own (with my own files I use a build version).

If you are using Apache Web Server then there are configuration parameters you can setup to control cache headers and gzipping (using mod_gzip). I could not find a configurable approach in Weblogic 8.1. Either it is hidden deep inside or it just does not exist.

So instead I put in a couple of Servlet filters to do the job for me. First one is a custom servlet filter that is mapped to all .js and .css files. This one simply adds an Expires header with 30 days into the future. I can do 30 days since I know I have control of the URL via version numbers. The second filter is for gzip. I was about to create my own filter when I noticed a gzip filter inside ehcache (net.sf.ehcache.constructs.web.filter.GzipFilter). I looked at the source and it did what I needed so I used it. The main thing that this filter does is to check the 'Accept-Encoding' header in the request to make sure that the client can support gzip and then if that’s good uses the java gzip libraries to compress the contents.
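The heart of those two filters is small enough to sketch. This is not the actual filter code (the real versions live in servlet Filter classes; the class and method names below are mine); it just shows the JDK pieces they rest on: formatting an Expires date 30 days out, checking the Accept-Encoding header, and gzipping bytes.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Locale;
import java.util.TimeZone;
import java.util.zip.GZIPOutputStream;

// Illustrative helpers only, not the real servlet filters.
public class FilterSketch {

    // RFC 1123 style date N days in the future, suitable for
    // response.setHeader("Expires", ...) in the first filter.
    public static String expiresInDays(int days) {
        Calendar cal = Calendar.getInstance(TimeZone.getTimeZone("GMT"));
        cal.add(Calendar.DAY_OF_MONTH, days);
        SimpleDateFormat fmt =
            new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss 'GMT'", Locale.US);
        fmt.setTimeZone(TimeZone.getTimeZone("GMT"));
        return fmt.format(cal.getTime());
    }

    // The gzip filter's first job: only compress if the client said it
    // can handle gzip in its Accept-Encoding request header.
    public static boolean clientAcceptsGzip(String acceptEncoding) {
        return acceptEncoding != null
            && acceptEncoding.toLowerCase().indexOf("gzip") >= 0;
    }

    // Its second job: compress the response body with the JDK gzip classes
    // (and set "Content-Encoding: gzip" on the response).
    public static byte[] gzip(byte[] body) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            GZIPOutputStream gz = new GZIPOutputStream(buf);
            gz.write(body);
            gz.close();
            return buf.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e); // cannot happen with in-memory streams
        }
    }
}
```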

One other thing Yslow reported was to reduce the number of .js files. With framework files such as YUI I cannot control that, but with my own libraries I can. I did merge some utility scripts into a single file. Not a big difference in the end but that’s what tuning is. Getting a little here and there and the grand total adds up fast.

If there are other ways folks have implemented this then please let me know. Remember, I am on Weblogic 8.1.

YUI Datatable - select all with checkbox column
January 2, 2008 6:54 PM

I am using YUI (version 2.4.1) datatable for tabular data rendering. I needed to allow the users to have the ability to select one or more rows using checkboxes and then submit them to the server for further action. Up to this point you can find sample code on YUI website or other sites if you search around.


For a better user experience I also needed to give them 'Select All' and 'Unselect All' links which did exactly that. This I was not able to find on the web (I am sure it is there somewhere). Since I finally figured it out, I thought it only fair to blog it so others can find it. The code is quite simple.

In the sample below I will only highlight the code necessary to make this happen. For more details on YUI datatable refer to Yahoo site.

To start off let's add a checkbox to the table. In the sample below the first column is the checkbox column.

myColumnDefs = [
     {key:"checked",label:"", width:"30", formatter:YAHOO.widget.DataTable.formatCheckbox},
     {key:"id", label:"ID",sortable:true, resizeable:true, width:"50"},
     {key:"name", label:"Name",sortable:true, resizeable:true, width:"250"},
     {key:"netamount", label:"Amount",sortable:true,resizeable:true,width:"100", formatter:YAHOO.widget.DataTable.formatCurrency}

];


myDataSource.responseSchema = {
        resultsList: "records",
        fields: [
             {key:"checked", parser:YAHOO.util.DataSource.parseBoolean},
             {key:"id", parser:YAHOO.util.DataSource.parseNumber},
             {key:"name", parser:YAHOO.util.DataSource.parseString},
             {key:"amount", parser:YAHOO.util.DataSource.parseNumber}
           ]
      };


In my case the data is coming via JSON and I need the default case to be select all. Thus my JSON result set will have checked = true for all rows.

Next here is the code for 'select all' and 'unselect all'. You can be smarter about this and use one function if you care.
function selectAll() {
    // declare with var so we do not leak globals
    var records = dataTable.getRecordSet().getRecords();
    for (var i = 0; i < records.length; i++) {
        dataTable.getRecordSet().updateKey(records[i], "checked", "true");
    }
    dataTable.refreshView();
    return false;
}

function unselectAll() {
    var records = dataTable.getRecordSet().getRecords();
    for (var i = 0; i < records.length; i++) {
        dataTable.getRecordSet().updateKey(records[i], "checked", "");
    }
    dataTable.refreshView();
    return false;
}

The links on the web page.
<a id="selectall" href="#" onclick="return selectAll();">Select All</a> &nbsp;|&nbsp;
<a id="unselectall" href="#" onclick="return unselectAll();">Unselect All</a>




AJAX Grid Widget
December 18, 2007 10:35 PM

For a recent project I was using Yahoo's YUI datatable component. With YUI often you can copy code from their site and tweak it to your needs. That to me is a sign of great developer documentation. This is especially needed in case of lots of JavaScript code. I have spent hours trying to figure out why code works in Firefox and not in IE. Only to find an extra comma inside my JSON code. IE is not forgiving regarding the extra comma.

Due to new requirements I needed to allow users to edit records in the table. I found Yahoo's cell editing capabilities a little out of the ordinary, with pop-ups coming up to enter data. Also I could not get tabbing between cells to work. While there may be ways to accomplish editing in a more Excel-like manner with datatable, I had to move fast. With 2-week SCRUM sprints I do not have the luxury of diving deep with every tool.

Then I hit upon Zapatec (www.zapatec.com). The name still drives me crazy. But their Grid component is awesome. They had plenty of samples, and the grid matched a lot of features I needed (and matched the YUI datatable), such as server side pagination and loading data via JSON (or XML). Most importantly it had a more natural (Excel-like) feel to cell editing, and thankfully tabbing between cells worked. The sign of a good product is when you can integrate it into your stuff with little pain. And the Zapatec grid so far has impressed me. If it holds up then eventually I will probably replace my other YUI datatables with it.

What never ceases to amaze me is the sheer power of JavaScript and all of the great components that are now coming out.

Note: I just wish Internet Explorer would have better memory management. Use any heavyweight widget library and soon you start seeing the memory footprint rise. What drives me crazy is that minimizing the browser will immediately release memory (which means MS could have made garbage collection more pro-active).

Acegi Security Framework
November 20, 2007 1:31 PM

Each time I looked at Acegi framework I got a headache. There is a lot of configuration. Until now I never had the opportunity to actually use it on a project.

I started out building my own security framework. A Webwork interceptor that would check if user was authenticated and then do the needful. Then I realized that my filter had to block all JSP pages but not the login.jsp. So I made that change and so on. Until it reached a point where I realized this was an absolute waste of my time. So with a fresh set of eyes I went back to Acegi. I was immediately faced with the usual tons of configuration. I resisted the instinct to drop it and plowed on.

It finally worked.

I would like to point you to a few resources that helped me get it working. Of course it goes without saying that you need to visit http://www.acegisecurity.org and check docs there.

A few pointers. Do not remove the anonymousProcessingFilter. That's what allows not-yet-logged-in users to get to your login page. Without it all resources, including your login page, could become secure. Now that's probably not what you want, I am sure: an application so secure no one can get to even the gate.

Also, in the web.xml I prefer explicitly configuring the URL patterns to apply security to. Check out the javaworld article for sample mappings. In most cases you do not want to apply filters to images, javascript or css files. If you need that then by all means map /* to the filter.

Finally, make sure to either reuse or read the details in the login page provided in the Acegi sample war. The login page is configured in applicationContext-acegi-security.xml as accessible by anonymous users (not-logged-in-yet).

I am purposely not going into the actual details since between the resources listed above you should be able to get your information.

Finally here are my configuration files so you can refer:
My user.properties contains sample userids, passwords and user roles. I am using the in-memory DAO for now; implementing a database DAO will be a later step for me (and that's the easy work).
admin=admin,ROLE_ADMIN
testuser=testpassword,ROLE_USER

Once the configuration is in place access to the web site will take you to the login page and things should work as expected. After this I was able to use the Acegi jsp taglibs to implement some basic role based authorization. In my case show certain links only if user is of certain roles.
<authz:authorize ifAnyGranted="ROLE_ADMIN">
some role specific content here
</authz:authorize>

Choice of Java frameworks (jdk 1.4)
November 14, 2007 6:58 AM

It's amazing how much work there is when wiring up a new J2EE application from the ground up. Deciding on the frameworks to use is one challenge, and then wiring them up so they all work seamlessly together is another. JBoss Seam has indeed tried to address this very issue and I wish them all the luck.

I am stuck in the JDK 1.4 world on this project. For this project I decided to use:
  • WebWork for web UI framework. Migration to Struts2 will be easier later when the project moves to JDK 1.5 (if ever).
  • Sitemesh - instead of tiles I preferred this one. Less configuration and works great for my needs.
  • Spring + Hibernate (and jdbc where needed) on the business logic tier.
  • Lastly I am using YUI for certain widgets such as datatable and calendar.
I wanted to go with GWT but decided against that. Though I personally think (as i have blogged earlier) that Java based web UI frameworks are the way to go in the future.

The application I am working on is not an AJAX rich client (and does not need to be) but some of the widgets in YUI fit nicely with my needs. The datatable is an amazing widget. I have configured it with server side pagination and I am extremely pleased with it. The one thing that stands out about YUI and the rest of the AJAX frameworks is their documentation. Often you can just copy and paste sample code and tweak it to your needs. But beware, you need to get your JavaScript skills ready for this. The more I see JavaScript the more I wish I was coding in JavaScript on the server side. It's got a basic syntax close to Java, so that's a plus to me. Other than that it's a different animal completely.

A few other points to note on my current effort:
  • The Spring aop support in the configuration xml is extremely useful. Use it to your advantage.
  • I have my Webwork actions being managed by Spring. This allows me to intercept the action classes in Spring and then do some work there, like exception handling. On the way out of the action classes I expect any application exceptions to be logged in a log file and also messages converted to full messages from resource bundles.

Scrum vs. Traditional Project Management
September 25, 2007 4:53 AM

Recently I signed up for a SCRUM certification class, and that got me thinking about my other effort, which is to get PMP certified. PMP-ians can boast that their certification is industry recognized and achieved only after passing a certification exam. SCRUM certified professionals are "certified" after they attend an approved certification class conducted by a Scrum certified trainer. No test required.

Now the question arises: is a Scrum certification of any use if you did not take an exam? It will be the easiest certification I have ever earned. But the aim of this certification is not to be certified, but to gain knowledge of the framework that it defines.

There was this one project I was on, where the Project Manager was completely at his wits' end on how to run the project. There was chaos during the development stage due to various reasons. The PM had, at his disposal, a highly competent and motivated team. The architect on the project stepped up and put in place a SCRUM-like process (unintentionally) to analyze daily progress and get things going, all the while with the PM being out of sync. Not having done any software development, the PM had great difficulty grasping the scale of the effort and the pace at which work was being done. But lo-and-behold, on the subsequent release he was a great PM. The moment things became a little more controlled he was able to jump in and take control. It was an interesting experience in hindsight. The first release could have ended in a disaster with the situation we had. The non-certified architect who led the unintentional scrum-style development saved the day (or the project in this case). This is not to blame the particular PM but simply to illustrate a point. And the point is that taking certifications does not make one a better PM or CSM.

Project management as we know it today has its roots in the manufacturing/auto/construction industry. Software project management got its management roots from the same place. Sure, over the years they have evolved to take into consideration the unique nature of software development.

But software development is a completely different animal compared to manufacturing or construction related projects. With the latter it is possible to measure tangible progress and quality due to the high level of automation and design that already exist. That's easier said than done in software projects. It is possible, but requires a high degree of commitment from management and some really good people and good processes in place. Oh yes, and when you have all of that, how do you make sure your project actually delivers something in a reasonable time frame?

Each and every software project (or product) is unique. Even a simple shopping cart software may have a hundred variations depending on specific client needs. The other challenge in software is that the tools at hand to build it are always changing (language, hardware, development processes, etc.). Last but not the least, the people building the software affect the software being built in larger ways than in the construction or manufacturing industries.

Software has become an intellectual game (I do not mean Einstein-like intellect here). And this forces a lot of people-egos into the project. How often have you met those who say (and behave like) they know the best way to build something? Ask 10 developers how to build a shopping cart and they will come up with 10 different ways, each confident that his way is the best way. Often they will give you the solution without even waiting for requirements. If you ask for requirements that can at times be seen as a weakness, not a strength.

Have you met the lone warrior who believes and behaves like he is God's gift to the software industry? The lone warrior cares not for the rest (and thereby the project). How do you manage such a resource? And add to this all the ego-centric-corporate-political battles that are fought among the different participants (and stakeholders) of a project.

Also, nowadays a B.S. in CS or an IT-related Master's does not mean much, since anyone can pick up a book and a computer and learn how to program. Is a degree-laden PMP (or CSM) better than a non-degree-laden one?

There are too many variations in software development, which makes certifications such as CSM or PMP quite irrelevant except for communicating that the individual has a certain body of knowledge in the area of the certification. That's it. The success of a Project Manager or SCRUM Master lies in the interest the individual has in that role, how well he can execute and deliver on that interest, and finally how well he interacts with people.

There are those who abuse processes due to their utter lack of understanding of the very thing they try to spread. I was once on a SCRUM project where the powers-that-be decided one fine day to embrace SCRUM, and the goal was to build small teams, tell each team one or two lines of what needs to be done and let them figure out everything else. Pray, how is the team, without access to any business user, to predict requirements? Before the shit hits the fan he or she has left the project or the company for bigger titled positions elsewhere.

So long live CSM and long live PMP. I for one do not like the title Project Manager (seems weighty and sometimes an excuse to have a chip on the shoulder) or the title SCRUM Master (master of what?). The right title is Software Development Manager. This title better reflects the responsibilities of the job.


Can JSON be friends with XSD?
July 30, 2007 7:55 PM

XML is widely used to represent data. XML is often talked of as a human readable format. Yes, it is readable, but how often do we need to do that? Use it for what it's worth: data representation.

XML Schema files are used to describe the elements that can be contained in an XML file, including data element names, data structures, types, sequence, etc. Given an XSD file it is easy to use available tools and APIs to verify the correctness of a document (validity and well-formedness).
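As a quick illustration, here is a minimal sketch of such a check using the standard javax.xml.validation API. The class name and the tiny inline schema are my own inventions for the example:

```java
import java.io.StringReader;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class XsdCheck {

    // Hypothetical one-element schema: a single global <name> element of type string.
    static final String XSD =
        "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>"
      + "  <xs:element name='name' type='xs:string'/>"
      + "</xs:schema>";

    // Returns true if the XML is both well-formed and valid against the schema.
    public static boolean isValid(String xml) {
        try {
            SchemaFactory factory =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(new StreamSource(new StringReader(XSD)));
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource(new StringReader(xml)));
            return true;
        } catch (Exception e) {
            // Any parse or validation error lands here.
            return false;
        }
    }
}
```

Calling `XsdCheck.isValid("<name>mathew</name>")` succeeds, while a document whose root element is not declared in the schema fails validation.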

JSON (JavaScript Object Notation) is another data representation and exchange format. JSON is receiving a lot of attention nowadays thanks to the AJAX speed train. When you think AJAX there are two types of clients:
  1. AJAX-lite clients. These web pages use XMLHttpRequest or some wrapper framework to make asynchronous calls to the server. They receive back HTML responses which the client then inserts into specific locations in the web page. These applications may sometimes use widget libraries to enhance user experience.
  2. RIA-AJAX. The asynchronous nature still exists. Always uses rich widget libraries (such as dojo, YUI, backbase, tibco GI, etc). But here the event handling and data manipulation is at the widget level (a widget being anything from a text box to a panel or window).
In the RIA-AJAX applications the browser page never refreshes. The client side RIA-AJAX framework provides support in building applications similar to a traditional Windows desktop application. In this scenario the communication between client and server is not HTML. It is some form that allows representing data. XML is one option and JSON is another. JSON support is built into most browsers. Here is a JSON sample.

{
    "phone": {
        "areaCode": 703,
        "number": 777000
    },
    "age": 5,
    "name": "mathew"
}

The XML would be
<root>
   <phone>
       <areaCode>703</areaCode>
       <number>777000</number>
   </phone>
   <age>5</age>
   <name>mathew</name>
</root>

Both formats represent the same data. XML is a little more verbose. For larger files XML is fatty.

It is entirely possible for a web application to send out JSON-formatted data to an AJAX client. It is possible to use JSON in a non-UI application too. In either case we need some way to serialize and deserialize JSON to and from Java. Serializing from Java objects to JSON is relatively simple (note I say relatively). The real challenge is in deserializing incoming JSON to Java objects. I have written a basic Java-to-JSON serializer (it supports basic types, wrapper types, complex Java objects and arrays of simple types), but I have not tried the other way yet.
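A serializer along those lines can be sketched with plain reflection. This is a toy, not the serializer described above: the class names are hypothetical, and string escaping, arrays and collections are left out to keep it short.

```java
import java.lang.reflect.Field;

// Toy reflection-based Java-to-JSON serializer: handles strings, numbers,
// booleans, null and nested objects by recursing over declared fields.
public class JsonWriter {

    public static String toJson(Object o) {
        if (o == null) return "null";
        if (o instanceof Number || o instanceof Boolean) return o.toString();
        if (o instanceof String) return "\"" + o + "\"";   // note: no escaping here
        // Any other object: walk its declared fields recursively.
        StringBuilder sb = new StringBuilder("{");
        Field[] fields = o.getClass().getDeclaredFields();
        for (int i = 0; i < fields.length; i++) {
            try {
                fields[i].setAccessible(true);
                if (i > 0) sb.append(",");
                sb.append("\"").append(fields[i].getName()).append("\":")
                  .append(toJson(fields[i].get(o)));
            } catch (IllegalAccessException e) {
                throw new RuntimeException(e);
            }
        }
        return sb.append("}").toString();
    }
}

// Sample object graph matching the JSON sample above.
class Phone {
    int areaCode = 703;
    int number = 777000;
}

class Person {
    Phone phone = new Phone();
    int age = 5;
    String name = "mathew";
}
```

`JsonWriter.toJson(new Person())` produces the same shape as the JSON sample above. Field order follows `getDeclaredFields`, which is declaration order on common JVMs but is not guaranteed by the spec.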

Also I wonder if we can use XSD to represent the grammar and rules inside a JSON data file. I see no reason why not. Has anyone tried this? Appreciate any pointers. For example, given a JSON data file and an XSD, is there something that can validate the JSON data file?

CruiseControl + ClearCase + Maven
July 30, 2007 7:14 PM

Recently I set up CruiseControl with ClearCase. While there are a few postings around the blogosphere that cover some of this, I thought I'd blog it here also. The more places this information is available, the easier it is for others to find.

Refer to my earlier post on CruiseControl setup for basic CruiseControl instructions. Below are the changes you need to make to get it working with ClearCase (with a Maven build). Of course it's up to you to actually install the ClearCase client and make sure you are able to check out code outside of CruiseControl.

Below is the updated project configuration file.

<project name="myproject" buildafterfailed="true">
    <plugin name="clearcase" classname="net.sourceforge.cruisecontrol.sourcecontrols.ClearCase"/>
 
    <listeners>
      <currentbuildstatuslistener file="logs/myproject/status.txt"/>
    </listeners>

    <bootstrappers>
    </bootstrappers>

    <!-- Defines where cruise looks for changes, to decide whether to run the build -->
    <modificationset quietperiod="10">
       <!--ucm stream="dev" viewpath="C:\projects\dev\myproject" contributors="true"/-->
       <clearcase branch="dev" viewpath="C:\projects\dev\myproject" recursive="true"/>
    </modificationset>

    <!-- Configures the actual build loop, how often and which build file/target -->
    <schedule interval="1800">
      <maven2 mvnscript="C:\tools\maven-2.0.7\bin\mvn.bat" pomfile="C:\projects\dev\myproject\pom.xml" goal="scm:update | clean test">
          <property name="VIEW_HOME" value="C:\projects\dev"/>
          .... other properties to pass to maven ...
      </maven2>
    </schedule>

     <log dir="logs/myproject" encoding="UTF-8">
    </log>

    <publishers>
        <currentbuildstatuspublisher file="logs/myproject/buildstatus.txt"/>
        <artifactspublisher dir="checkout/myproject/report" dest="artifacts/myproject"/>

        <htmlemail mailhost="mailserver.yourcompany.com"
                returnaddress="buildmanager@yourcompany.com"
                reportsuccess="fixes"
                subjectprefix="myproject Build Results"
                buildresultsurl="http://yourcompany.com:12000/cruisecontrol/buildresults/myproject"
                skipusers="false" spamwhilebroken="false"
                css="webapps/cruisecontrol/css/cruisecontrol.css"
                xsldir="webapps/cruisecontrol/xsl"
                logdir="logs/myproject">
                        <success address="devmailinglist@yourcompany.com"/>
                        <failure address="devmailinglist@yourcompany.com"/>
        </htmlemail>
    </publishers>

  </project>

I am not going to explain the file in any detail; things should be self-explanatory. With Maven I needed to make one more addition to my pom.xml:

<scm>
      <connection>scm:clearcase:load c:/projects/dev</connection>
</scm>

Hopefully this helps someone out there. To give credit where credit is due, I did find Simon's blog entry helpful.

XFire WebService With Spring
May 7, 2007 3:34 PM

Tried setting up XFire with Spring and thought I'd share that experience. One more place to come for this information will not hurt ah!

Once again I used Maven to build my test application. At the bottom of this article you will find a download link for the entire application.

I have used Axis in the past and wanted to try out some other frameworks. At the same time I absolutely needed the framework to support JSR 181 (web service annotations), to integrate with Spring, and to have relatively simple configuration. Oh, and I did not want to write any WSDL. This example is an RPC-based web service (unlike my previous article on a document-based web service with Spring-WS). After this article I will also start using Axis2, since I have been an Axis fan for many years.

JSR 181 is important to me. I think annotations are the right way to go for most simple tasks that do not require a lot of input. The web service annotations are a good example of where annotations are the right fit. I have seen examples of annotations where it would be far easier and clearer to use XML-style configuration instead. Some folks are anti-annotations, and I think that attitude is not the best. Use annotations where they make the most sense.

Let's view the echo service Java POJO code.
package com.aver;

public interface EchoService {
    public String printback(java.lang.String name);
}

package com.aver;

import java.text.SimpleDateFormat;
import java.util.Calendar;

import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebResult;
import javax.jws.WebService;

@WebService(name = "EchoService", targetNamespace = "http://www.averconsulting.com/services/EchoService")
public class EchoServiceImpl implements EchoService {

    @WebMethod(operationName = "echo", action = "urn:echo")
    @WebResult(name = "EchoResult")
    public String printback(@WebParam(name = "text") String text) {
        if (text == null || text.trim().length() == 0) {
            return "echo: -please provide a name-";
        }
        SimpleDateFormat dtfmt = new SimpleDateFormat("MM-dd-yyyy hh:mm:ss a");
        return "echo: '" + text + "' received on " + dtfmt.format(Calendar.getInstance().getTime());
    }
}

As you can see above I have made liberal use of JSR 181 web service annotations.
  • @WebService declares the class as exposing a web service method(s).
  • @WebMethod declares the particular method as being exposed as a web service operation.
  • @WebParam gives easy-to-read parameter names which will show up in the auto-generated WSDL. Always provide these for the sake of your consumers' sanity.
  • Also you can see that the java method is named 'printback' but exposed as name 'echo' by the @WebMethod annotation.
Here is the web.xml.
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE web-app
    PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
    "http://java.sun.com/dtd/web-app_2_3.dtd">
<web-app>
    <context-param>
       <param-name>contextConfigLocation</param-name>
       <param-value>
          classpath:org/codehaus/xfire/spring/xfire.xml
          /WEB-INF/xfire-servlet.xml
       </param-value>
    </context-param>

    <listener>
       <listener-class>
          org.springframework.web.context.ContextLoaderListener
       </listener-class>
    </listener>

    <servlet>
       <servlet-name>XFireServlet</servlet-name>
       <servlet-class>
          org.codehaus.xfire.spring.XFireSpringServlet
       </servlet-class>
    </servlet>
    <servlet-mapping>
       <servlet-name>XFireServlet</servlet-name>
       <url-pattern>/servlet/XFireServlet/*</url-pattern>
    </servlet-mapping>
    <servlet-mapping>
       <servlet-name>XFireServlet</servlet-name>
       <url-pattern>/services/*</url-pattern>
    </servlet-mapping>
</web-app>

The web.xml configures the 'XFireSpringServlet' and sets up the Spring listener. Straightforward.

Finally here is the xfire-servlet.xml (this is our Spring configuration file).
 
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans-2.0.xsd">

    <bean id="webAnnotations"
       class="org.codehaus.xfire.annotations.jsr181.Jsr181WebAnnotations"/>

    <bean id="jsr181HandlerMapping"
       class="org.codehaus.xfire.spring.remoting.Jsr181HandlerMapping">
       <property name="typeMappingRegistry">
           <ref bean="xfire.typeMappingRegistry" />
       </property>
       <property name="xfire" ref="xfire" />
       <property name="webAnnotations" ref="webAnnotations" />
    </bean>

    <bean id="echo" class="com.aver.EchoServiceImpl" />
</beans>
  • Sets up the xfire bean to recognize jsr 181 annotations.
  • The last bean is our echo service implementation bean (with annotations).
That is it. Build and deploy this and you should see the WSDL at http://localhost:9090/echoservice/services/EchoServiceImpl?wsdl.

Click here to download the Maven based project code. To build run:
  • mvn package
  • mvn jetty:run

Spring-WS
May 6, 2007 7:34 PM

Took a look at Spring-WS and came up with a quick example service to describe its use. I decided to build an 'echo' service. Send in a text and it will echo that back with a date and time appended to the text.

After building the application I saw that Spring-WS comes with a sample echo service application. Oh well. Since I put in the effort here is the article on it.

Spring-WS encourages document based web services. As you know there are mainly two types of web services:
  • RPC based. 
  • Document based.
In RPC you think in terms of traditional functional programming. You decide what operations you want and then use the WSDL to describe the operations and then implement them. If you look at any RPC based WSDL you will see in the binding section the various operations.

In the document based approach you no longer think of operations (their parameters and return types). You decide on what XML document you want to send in as input and what XML document you want to return from your web service as a response.

Spring-WS encourages a more practical approach to designing document based web services. Rather than think WSDL, it pushes you to think XSD (or the document schema) and then Spring-WS can auto-generate the WSDL from the schema.

Lets break it up into simpler steps:
  1. Create your XML schema (.xsd file). Inside the schema you will create your request messages and response messages. Bring up your favorite schema editor to create the schema, or write sample request and response XML and then reverse-engineer the schema (check if your tool supports it).
  2. You have shifted the focus onto the document (or the XML). Now use Spring-WS to point to the XSD and set up a few Spring managed beans and soon you have the web service ready. No WSDL was ever written.
Spring-WS calls this the contract-first approach to building web services.

Let's see the echo service in action. You will notice that I do not create any WSDL document throughout this article.

Business Case:
Echo service takes in an XML request document and returns an XML document with a response. The response contains the text that was sent in, appended with a timestamp.


Request XML Sample:
<ec:EchoRequest>
    <ec:Echo>
        <ec:Name>Mathew</ec:Name>
    </ec:Echo>
</ec:EchoRequest>

The schema XSD file for this can be found in the WEB-INF folder of the application (echo.xsd).


Response XML Sample:
<ec:EchoResponse>
    <ec:Message>echo back: name Mathew received on 05-06-2007 06:42:08 PM</ec:Message>
</ec:EchoResponse>


If you inspect the SOAP request and response you will see that this XML is what's inside the SOAP body. This is precisely what document-based web services are about.


EchoService Implementation:
Here is the echo service Java interface and its related implementation. As you can see this is a simple POJO.
package echo.service;

public interface EchoService {
    public String echo(java.lang.String name);
}

package echo.service;

import java.text.SimpleDateFormat;
import java.util.Calendar;

public class EchoServiceImpl implements EchoService {

    public String echo(String name) {
        if (name == null || name.trim().length() == 0) {
            return "echo back: -please provide a name-";
        }
        SimpleDateFormat dtfmt = new SimpleDateFormat("MM-dd-yyyy hh:mm:ss a");
        return "echo back: name " + name + " received on "
                + dtfmt.format(Calendar.getInstance().getTime());
    }
}


Now the Spring-WS stuff:

Here is the web.xml:
    <display-name>Echo Web Service Application</display-name>

    <servlet>
        <servlet-name>spring-ws</servlet-name>
        <servlet-class>org.springframework.ws.transport.http.MessageDispatcherServlet</servlet-class>
    </servlet>

    <servlet-mapping>
        <servlet-name>spring-ws</servlet-name>
        <url-pattern>/*</url-pattern>
    </servlet-mapping>


The only thing to note in the web.xml is the Spring-WS servlet.

Next is the all important Spring bean configuration XML (spring-ws-servlet.xml).
    <bean id="echoEndpoint" class="echo.endpoint.EchoEndpoint">
        <property name="echoService"><ref bean="echoService"/></property>
    </bean>

    <bean id="echoService" class="echo.service.EchoServiceImpl"/>

    <bean class="org.springframework.ws.server.endpoint.mapping.PayloadRootQNameEndpointMapping">
        <property name="mappings">
            <props>
                <prop key="{http://www.averconsulting.com/echo/schemas}EchoRequest">echoEndpoint</prop>
            </props>
        </property>
        <property name="interceptors">
            <bean class="org.springframework.ws.server.endpoint.interceptor.PayloadLoggingInterceptor"/>
        </property>
    </bean>

    <bean id="echo" class="org.springframework.ws.wsdl.wsdl11.DynamicWsdl11Definition">
        <property name="builder">
            <bean class="org.springframework.ws.wsdl.wsdl11.builder.XsdBasedSoap11Wsdl4jDefinitionBuilder">
                <property name="schema" value="/WEB-INF/echo.xsd"/>
                <property name="portTypeName" value="Echo"/>
                <property name="locationUri" value="http://localhost:9090/echoservice/"/>
            </bean>
        </property>
    </bean>

  • Registered the 'echoService' implementation bean.
  • Registered an endpoint class named 'echoEndpoint'. The endpoint is the class that receives the incoming web service request. 
  • The endpooint receives the XML document, parses the XML data, and then calls our echo service implementation bean.
  • The bean 'PayloadRootQNameEndpointMapping' is what maps the incoming request to the endpoint class. Here we set up one mapping. Anytime we see a 'EchoRequest' tag with the specified namespace we direct it to our endpoint class.
  • The 'XsdBasedSoap11Wsdl4jDefinitionBuilder' class is what does the magic of converting the schema XSD to a WSDL document for outside consumption. Based on simple naming conventions in the schema (like XXRequest and XXResponse) the bean can generate a WSDL. This rounds out the 'thinking in XSD for document web services' implementation approach. Once deployed, the WSDL is available at http://localhost:9090/echoservice/echo.wsdl.
Finally here is the endpoint class. This is the class, as previously stated, that gets the request XML and can handle the request from there.
package echo.endpoint;

import org.jdom.Document;
import org.jdom.Element;
import org.jdom.Namespace;
import org.jdom.output.XMLOutputter;
import org.jdom.xpath.XPath;
import org.springframework.ws.server.endpoint.AbstractJDomPayloadEndpoint;

import echo.service.EchoService;

public class EchoEndpoint extends AbstractJDomPayloadEndpoint {

    private EchoService echoService;

    public void setEchoService(EchoService echoService) {
        this.echoService = echoService;
    }

    protected Element invokeInternal(Element request) throws Exception {
        // ok now we have the XML document from the web service request
        // lets system.out the XML so we can see it on the console (log4j later)
        System.out.println("XML Doc >> ");
        XMLOutputter xmlOutputter = new XMLOutputter();
        xmlOutputter.output(request, System.out);

        // I am using JDOM for my example....feel free to process the XML in
        // whatever way you best deem right (jaxb, castor, sax, etc.)

        // some jdom stuff to read the document
        Namespace namespace = Namespace.getNamespace("ec",
                "http://www.averconsulting.com/echo/schemas");
        XPath nameExpression = XPath.newInstance("//ec:Name");
        nameExpression.addNamespace(namespace);

        // lets call a backend service to process the contents of the XML document
        String name = nameExpression.valueOf(request);
        String msg = echoService.echo(name);

        // build the response XML with JDOM
        Namespace echoNamespace = Namespace.getNamespace("ec",
                "http://www.averconsulting.com/echo/schemas");
        Element root = new Element("EchoResponse", echoNamespace);
        Element message = new Element("Message", echoNamespace);
        root.addContent(message);
        message.setText(msg);
        Document doc = new Document(root);

        // return response XML
        System.out.println();
        System.out.println("XML Response Doc >> ");
        xmlOutputter.output(doc, System.out);
        return doc.getRootElement();
    }
}
This is a simple class. The important point to note is that it extends 'AbstractJDomPayloadEndpoint'. The 'AbstractJDomPayloadEndpoint' class is a helper that gives you the XML payload as a JDOM object. There are similar classes built for SAX, StAX and others. Most of the code above reads the request XML using the JDOM API and parses the data out so that we may provide it to our echo service for consumption.

Finally I build a response XML document to return, and that's it.

Download the sample Application:
Click here to download the jar file containing the application. The application is built using Maven. If you do not have Maven please install it. Once Maven is installed run the following commands:
  1. mvn package (this will generate the web service war file in the target folder).
  2. mvn jetty:run (this will bring up Jetty and you can access the WSDL at http://localhost:9090/echoservice/echo.wsdl).
  3. Finally use some web service accessing tool like the eclipse plug-in soapUI to invoke the web service.
As you can see this is relatively simple. Spring-WS supports the WS-I basic profile and WS-Security. I hope to look at the WS-Security support sometime soon. Also interesting to me is the content based routing feature. This lets you configure which object gets the document based on the request XML content. We did the QName based routing in our example but the content based parsing is of greater interest to me.

While I could not find a roadmap for Spring-WS, depending on the features it starts supporting this could become a very suitable candidate for web service integration projects. Sure folks will say where is WS-Transactions and all of that, but tell me how many others implement that. I think if Spring-WS grows to support 90% of what folks need in integration projects then it will suffice.

Being Agile with FDD Process
March 7, 2007 11:58 AM

Feature Driven Development (FDD)

There are so many development methodologies out there; each promising to solve our software development nightmares. Each promising to make software development easier. I will not even go on to suggest that there is any one or two methodologies that work.

The methodology you choose depends on:
  • Your organization's experiences.
  • Your team dynamics and skills.
  • And of course how much confidence you have in the selected methodology.
Often I have worked with teams that follow home-grown methodologies that are lite and tuned to what that organization believes is the best way for them to get work done. Fair enough.

I worked on one project a while back that used the Feature Driven Development (FDD) agile process. Personally, to me the success of a methodology depends on how simple it is to understand. Software development happens in three main areas: requirements gathering, coding and testing. Everything else is done to support these three related tasks. FDD is a relatively simple process, which is what attracted me to it. And I am living proof that it worked when we used it, though in hindsight we did not formally follow every step as prescribed.

So what is FDD? FDD is an agile, iterative development process. The dictionary defines agile as 'moving quickly and lightly'. When you hear agile, look for that meaning. If you look in your process and you do not see that, then you are not in an agile process. Seems obvious ah! You need to change your mindset and work culture to fit agile. Organizations that are not agile often have difficulty moving towards agility. Then there are those that were agile years before the buzzword 'agile development' even came along.

Being agile to me does not mean you do away with traditional requirements gathering procedures and mechanisms. You still have to gather detailed requirements. I often see Ruby on Rails developers prescribe that framework as the framework to choose for agile development of web projects. Look that up on your own. They have a childish view of the development world where there is no corporate politics, there is no need to gather requirements upfront (gather as you code... best of luck), the business users seem to have all the time to sit with the developers, etc. The real world is often quite different. Don't get me wrong, I like Ruby and Ruby on Rails. I think Java has a lot to learn from there.

FDD is divided into 5 main activities. Each activity encompasses a set of steps that seem so obvious in software development. It does not do away with requirements gathering or any other step; it just places them in the right activity. Here are the five activities and what I think are some of the key tasks in each:
  • Develop Overall Model: Work with analysts/business users to get the application domain model defined. Based on the complexity of the application you may divide the project into many sub-domains and create a domain model for each area. You would gather high level requirements and may even be doing some use cases in this phase, though I doubt that will happen this early. Lot of interaction with business users during this phase.
  • Build Feature List: Work with project management to define and create the feature list for the current release. The feature list should reflect business users priorities. Features should be as small as possible to implement in terms of time.
  • Plan By Feature: Organize your features in terms of priority (related to client needs as well as development). For every feature capture the LOE (level of effort). Drop your features into iterations, each no longer than 4 to 6 weeks in duration. The LOE should include design, coding and testing (if possible include reviews). Plan for dropping each feature into a QA test environment. For subsequent iterations (after the first) keep aside some time to fix defects and changes from earlier iterations. You can even lengthen subsequent iterations by a week or so. This filler time you can adjust as needed based on outstanding defect count and change requests.
  • Design By Feature: Developers get your hands dirty now. Draw up class diagrams, sequence diagrams for current iteration.
  • Build By Feature: Code it and drop the iteration into the QA environment.
One step I feel is missing is a 'Test By Feature'. This is so that the QA team's work gets recognized as a critical part of development and can be planned in advance.

AJAX roundup
February 15, 2007 6:41 AM

Recently I have been playing with various AJAX frameworks, both open source and commercial. For simple 'update a certain portion of the page only' applications you can roll up your sleeves and deal with XMLHttpRequest (XHR) directly, or use open source APIs like Prototype or Dojo to simply make the call.

The real AJAX experience is achieved when you use the asynch call feature provided by XHR in combination with very rich GUI widgets to truly give your users a rich user experience on the web. This is where the challenge of AJAX lies. For too long developers (including myself) have used JavaScript to do very simple tasks. Keeping in-memory state on the browser was never a design detail. I think for long we have looked at JavaScript as a simple client side scripting language. On a recent project I used JavaScript in an 'object oriented way' using JSON to represent objects and actively manage user data in the browser. We had another JavaScript guru who whipped out a boatload of JavaScript widgets.  Hindsight being 20-20 we would have been better off with an open source or commercial framework. But 18 months back there were not too many good options. We did use the open source prototype library extensively.

Getting back to the main point, I do not think much of the developer community realizes how exciting AJAX RIA development is going to be. I consider myself an end-to-end developer but often focused on the backend. The backend stuff (with Spring, Hibernate, EJB, etc.) is quite mature and web 2.0 is where the excitement is. For those who are technologists it is where you want to at least spend some time. The web developers who create HTML pages for traditional web 1.0 applications need to update their skills and approaches drastically to be successful in the web 2.0 world. Developers with OO language experience and at least basic JavaScript knowledge are very well positioned to do AJAX RIA development.

Like anything on the server side of things, the fun (or pain) will be directly tied to the framework you use for AJAX RIA development. On the commercial side I looked at JackBe, Backbase and TIBCO General Interface. Each of them comes equipped with an IDE to do development. The most impressive IDE was TIBCO General Interface. You unzip the source, point your browser to one HTML file, and voila, you are in an IDE all within the browser. Pretty amazing when you realize they may have used many of their own components to build the IDE. It is a very similar situation with JackBe. The thing I did not like about JackBe was that you need to start a Tomcat instance and then point to that URL to get to your browser-based IDE. Also JackBe's components and APIs were quite cryptic, with 3-letter function names and no packaging of the various classes. And various components were not feature-rich. But I think, as with any product, they will evolve. As for the others, I looked at them and had mixed reactions. The real unknown to me about these commercial tools is what my experience will be if I need to get out of the IDE and deal with the code directly. That's where open source wins.

On the open source side there are Dojo and Yahoo's YUI. Both are very comparable in ease of use, features and widget support. Both have nicely packaged components; by packaging I mean Java-like packaging of APIs. It makes a developer's life very easy. Where YUI wins outright is documentation. It's great. With Dojo I have had to spend many hours poring through various sites to get developer help. They have a manual and an API guide, but somehow it is not enough or not up to the mark. Check out the YUI docs and they will make any developer smile.

Then there are tools like Google's GWT and the open source Echo2 framework. Both of these allow you to build your web GUI using Java classes (like Swing). Of course the APIs are not the same as Swing. But with this you can forget about building complex JavaScript and let these frameworks generate the JavaScript and also map UI events to Java code. Very nice. Though I still think this is bleeding edge and a little risky, I would be game to try it on small projects. Personally I think in the long term this is the best approach. Will that happen? Who knows. Right now these APIs generate HTML and JavaScript. Tomorrow they could generate Flash content or even Swing. I think the possibilities are great. But only time will tell where this will go.

Note: I was quite amazed by TIBCO General Interface's support for web services. Take a look at that. The IDE can parse a WSDL and allows you to drag and drop form elements directly onto WSDL elements to map them together.


Maven
January 6, 2007 11:22 PM

Can build management be made easier? Chances are whatever approach you take will have its own pitfalls and challenges. Many Java projects nowadays use Ant as the de facto build scripting tool. It's not too hard really. It can take a couple of days for the Ant build file to be created (based on project needs), and subsequently it has to be managed like any other source code. This approach works well. Most folks now know Ant well enough to get most tasks completed. Why then would anyone want to move to Maven?

Maven, for starters, makes it easy to manage your build file. Actually one step ahead... it makes it easy to handle your project artifacts (code, folder structure, normal build lifecycle tasks, documentation, etc.). It also plugs in with many other open source tools to get the job done through Maven. Still, why Maven? Take a look at Maven Getting Started.

Now you realize how simple things could be. One can argue that we have moved some of the complexity into the pom.xml file (the project object model). There will be a small learning curve (almost insignificant) in understanding how to use maven. Still nothing that Ant cannot do.

With Ant you have to write a lot of script lines to get most work done. In Maven you use plugins that are built to do specific tasks. Plugins will do tasks like compile, clean, and package to the correct distribution (ear, jar, etc.) and so on. It even has a command to build a project site. There are a lot of plugins available. For example check AppFuse for some of its Maven plugins that are coming soon.

The power of Maven is that you can accomplish a lot by reusing plugins and not having to roll your own script every time you need something new.

That to me personally is the single biggest reason to use Maven. And I have not even mentioned dependency management yet. A lot of projects do not really need the kind of dependency management that tools like Maven provide (Maven lovers, please do not bash me for saying this). This feature can be really useful if your entire organization (or company) consolidates on using Maven, or if you are on a project (or product development) where there is a high level of component reuse between developers and many of the components have multiple versions around. Developer A builds version 10 of a component and installs that into the Maven repository. Now others can use that as needed or still rely on the old version till they need to move ahead.

Still, if you are using Maven on a regular web application project there is no harm in using its dependency management feature. Just remember to have a local repository on the network so that everyone can access the same repository, and do check the repository into your source control (Subversion or CVS or whatever). Without Maven you would have colocated all of the libraries within your project folders and also checked them into source control. With Maven, just because you have a central repository and no co-located libraries does not mean you should skip checking in the libraries (in this case, the repository).
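The version-10-of-a-component scenario above looks like this in the consuming project's pom.xml (coordinates and version are hypothetical); a teammate who is not ready to move ahead simply keeps an older number here:

```xml
<dependencies>
  <dependency>
    <!-- Component built by Developer A and installed in the shared repository -->
    <groupId>com.mycompany</groupId>
    <artifactId>some-component</artifactId>
    <version>10</version>
  </dependency>
</dependencies>
```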

Some of the features I like in Maven (other than the standard build lifecycle ones):
  • If you follow the default folder layout for the project type you are interested in, life becomes a lot easier.
  • Commands like mvn eclipse:eclipse that will generate IDE project files from your Maven project.
  • Command mvn site will generate a project site.
  • Managing resource files (like properties files). Maven does a nice job of organizing the folders for your resource files (both main line code and test code separately) and optionally lets you do token filtering on those files.
  • The many plugins available to create different types of projects, from plain vanilla Java applications to web applications. There are also plugins for many other often-used tools such as CruiseControl, Clover, etc. Once a project of that type is created you just run commands such as compile or package to get the job done.
One feature worth mentioning is how you can divide your project into separate projects (each with its own POM) and then have a top level Maven project (with its own pom.xml) that builds everything and merges dependencies between the two. Consider the scenario where you have a web application. You have all the web tier code (jsp, servlet, mvc, javascript, etc). You also have the service tier or the business tier. I would divide the two into separate independent projects, so that developers can work on them independently.

Here is what the directory structure looks like:
MyBigApp
    |
    |-- pom.xml
    |
    |-- servicetier
    |       |-- pom.xml
    |
    |-- webtier
    |       |-- pom.xml
Now you can use the top-level pom to build the whole project. You can set the webtier project to pull in the dependent libraries from the servicetier project. Refer to the Maven Getting Started link above for a sample of how to configure this.
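As a rough sketch (names taken from the tree above, everything else assumed), the top-level pom declares the two subprojects as modules, and the webtier pom depends on the servicetier artifact:

```xml
<!-- MyBigApp/pom.xml : aggregates the two subprojects -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mycompany</groupId>
  <artifactId>MyBigApp</artifactId>
  <version>1.0</version>
  <packaging>pom</packaging>
  <modules>
    <module>servicetier</module>
    <module>webtier</module>
  </modules>
</project>
```

In webtier/pom.xml you would then add a `<dependency>` on com.mycompany:servicetier:1.0, and building from the top builds servicetier first and hands its jar to webtier.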

On a recent project we tried this exact same thing using Ant. The subprojects would work just fine if you built them separately, but when we used the top-level Ant script we ran into some painful classpath issues. We gave up pretty soon as we were running out of time, and in the end merged the two Ant files into one big one. Had we used Maven from day 1 we would have been in better shape. Which brings me to my final point: try to use Maven from day 1. That way you can adhere to the folder structures that Maven generates by default. Retrofitting Maven into an existing project may not be a fun task. Also, early on you will have to spend some extra time configuring Maven for the first time and generating a quick handy-dandy list of commands that developers (including you) need on a daily basis.

JMX + Spring + Commons Attributes
December 20, 2006 5:01 PM

In my previous log4j blog I used JMX to expose management interfaces to change log4j levels dynamically. If you look at the JBoss JMX console in that blog you will see that the parameters are named p1 and p2. Not very helpful. By default Spring uses reflection to expose the public methods of the MBean, but parameter names get thrown away once classes are compiled to byte code. Therefore there is no metadata available to print friendlier names in the JMX console.
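You can see the name loss with plain reflection. A small self-contained sketch (note: the Parameter reflection API used in main arrived much later, in Java 8, and even there the source names only survive if you compile with -parameters; otherwise you get synthesized names like arg0):

```java
import java.lang.reflect.Method;
import java.lang.reflect.Parameter;

public class ParamNameDemo {
    // Sample method whose parameter names we try to recover at runtime.
    public void setLogLevel(String loggerName, String level) {
    }

    public static int paramCount() throws Exception {
        Method m = ParamNameDemo.class.getMethod("setLogLevel", String.class, String.class);
        // Reflection always sees the parameter *types*; the source-level
        // names (loggerName, level) are normally gone from the bytecode.
        return m.getParameterTypes().length;
    }

    public static void main(String[] args) throws Exception {
        Method m = ParamNameDemo.class.getMethod("setLogLevel", String.class, String.class);
        for (Parameter p : m.getParameters()) {
            // Without javac -parameters this prints synthesized names (arg0, arg1),
            // which is exactly why the JMX console falls back to p1/p2.
            System.out.println(p.getName() + " : " + p.getType().getSimpleName());
        }
    }
}
```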

I could use the commons modeler project and externalize my MBean information completely to an XML file, OR I can continue to use Spring and use the Spring-provided commons attributes annotations. Let's get straight to an example.

Note: Commons attributes are used to attach metadata to classes or methods and have it available at runtime. If you are using Java 5 then commons attributes is not the best approach; use Java 5 annotations, since that's the standard now. Commons attributes are useful if you are still in the JDK 1.4 world. Spring has Java 5 annotation equivalents for the same stuff described below.
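Before the real sample, here is what "metadata available at runtime" means in Java 5 terms. This is a self-contained sketch with a made-up @Described annotation read back via reflection; Spring's real equivalents for JMX live in the org.springframework.jmx.export.annotation package:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class AnnotationDemo {
    // Hypothetical stand-in for a JMX description annotation. RUNTIME
    // retention keeps the metadata in the class file so it survives
    // compilation, unlike parameter names.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface Described {
        String value();
    }

    @Described("Change the log level for named logger.")
    public void changeLogLevel(String loggerName, String level) {
    }

    // Looks up the description the way an MBeanInfo assembler would.
    public static String descriptionOf(String methodName) throws Exception {
        for (Method m : AnnotationDemo.class.getMethods()) {
            if (m.getName().equals(methodName) && m.isAnnotationPresent(Described.class)) {
                return m.getAnnotation(Described.class).value();
            }
        }
        return null;
    }
}
```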

package com.aver.jmx;

import org.apache.commons.lang.StringUtils;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

/**
* @@org.springframework.jmx.export.metadata.ManagedResource
* (description="Manage Log4j settings.", objectName="myapp:name=Log4jLevelChanger")
*/
public class Log4jLevelChanger {
/**
* @@org.springframework.jmx.export.metadata.ManagedOperation (description="Change the log level for named logger.")
* @@org.springframework.jmx.export.metadata.ManagedOperationParameter(index=0,name="loggerName",description="Logger name")
* @@org.springframework.jmx.export.metadata.ManagedOperationParameter(index=1,name="level",description="Log4j level")
*
* Sets the new log level for the logger and returns the updated level.
*
* @param loggerName logger name (like com.aver)
* @param level level such as debug, info, error, fatal, warn or trace
* @return current log level for the named logger
*/
public String changeLogLevel(String loggerName, String level) {
// validate logger name
if (StringUtils.isEmpty(loggerName)) {
return "Invalid logger name '" + loggerName + "' was specified.";
}

// validate level
if (!isLevelValid(level)) {
return "Invalid log level " + level + " was specified.";
}

// change level
switch (Level.toLevel(level).toInt()) {
case Level.DEBUG_INT:
Logger.getLogger(loggerName).setLevel(Level.DEBUG);
break;
case Level.INFO_INT:
Logger.getLogger(loggerName).setLevel(Level.INFO);
break;
case Level.ERROR_INT:
Logger.getLogger(loggerName).setLevel(Level.ERROR);
break;
case Level.FATAL_INT:
Logger.getLogger(loggerName).setLevel(Level.FATAL);
break;
case Level.WARN_INT:
Logger.getLogger(loggerName).setLevel(Level.WARN);
break;
}
return getCurrentLogLevel(loggerName);
}

/**
* @@org.springframework.jmx.export.metadata.ManagedOperation (description="Return current log level for named logger.")
* @@org.springframework.jmx.export.metadata.ManagedOperationParameter(index=0,name="loggerName",description="Logger name")
*
* Returns the current log level for the specified logger name.
*
* @param loggerName
* @return current log level for the named logger
*/
public String getCurrentLogLevel(String loggerName) {
// validate logger name
if (StringUtils.isEmpty(loggerName)) {
return "Invalid logger name '" + loggerName + "' was specified.";
}
return Logger.getLogger(loggerName) != null && Logger.getLogger(loggerName).getLevel() != null ? loggerName
+ " log level is " + Logger.getLogger(loggerName).getLevel().toString() : "unrecognized logger "
+ loggerName;
}

private boolean isLevelValid(String level) {
return (!StringUtils.isEmpty(level) && ("debug".equalsIgnoreCase(level) || "info".equalsIgnoreCase(level)
|| "error".equalsIgnoreCase(level) || "fatal".equalsIgnoreCase(level) || "warn".equalsIgnoreCase(level) || "trace"
.equalsIgnoreCase(level)));
}
}


You will need to include an additional step in your Ant build file.
<target name="compileattr">
<taskdef resource="org/apache/commons/attributes/anttasks.properties">
<classpath refid="classpath"/>
</taskdef>

<!-- Compile to a temp directory: Commons Attributes will place Java source there. -->
<attribute-compiler destdir="${gen}">
<fileset dir="${src}" includes="**/jmx/*.java"/>
</attribute-compiler>
</target>

This attribute compiler generates additional java classes that will hold the metadata information provided in the attribute tags in the sample code. Make sure to compile the generated source along with your normal compile.

Finally Spring must be configured to use commons attributes. Once this step is done you are in business.
  <bean id="httpConnector" class="com.sun.jdmk.comm.HtmlAdaptorServer" init-method="start">
<property name="port" value="12001"/>
</bean>

<bean id="mbeanServer" class="org.springframework.jmx.support.MBeanServerFactoryBean"/>

<bean id="exporter" class="org.springframework.jmx.export.MBeanExporter" lazy-init="false">
<property name="beans">
<map>
<entry key="myapp:name=Log4jLevelChanger" value-ref="com.aver.jmx.Log4jLevelChanger" />
<entry key="myapp:name=httpConnector"><ref bean="httpConnector"/></entry>
</map>
</property>
<property name="server" ref="mbeanServer"/>
<property name="assembler">
<ref local="assembler"/>
</property>
</bean>

<bean id="attributeSource" class="org.springframework.jmx.export.metadata.AttributesJmxAttributeSource">
<property name="attributes">
<bean class="org.springframework.metadata.commons.CommonsAttributes"/>
</property>
</bean>

<bean id="assembler" class="org.springframework.jmx.export.assembler.MetadataMBeanInfoAssembler">
<property name="attributeSource">
<ref local="attributeSource"/>
</property>
</bean>

I will not go into detailed explanations here. The AttributesJmxAttributeSource and MetadataMBeanInfoAssembler beans configure Spring to use the commons attributes generated classes, thereby making the metadata available at runtime. Take a look at the generated attribute Java source and you will quickly realize what commons-attributes is doing. By default Spring uses org.springframework.jmx.export.assembler.SimpleReflectiveMBeanInfoAssembler, which uses reflection to expose all of the public methods as JMX attributes/operations. With commons-attributes you can pick and choose which methods get exposed.

The only other thing to note: in the XML configuration above I start a JMX server (from Sun). Whether you want to use Sun's reference JMX console or a commercial tool (or an open source tool like JManage) is your choice. Similarly, I chose to create my own MBeanServer. You can tag along with your container's MBean server if you prefer.
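Spring and the container aside, the core register-and-invoke cycle can be shown with nothing but the JDK's javax.management API. A minimal sketch (bean and object names invented) using the platform MBean server and the standard MBean naming convention, where the interface is the implementation class name plus "MBean":

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxDemo {
    // Standard MBean pattern: management interface = class name + "MBean".
    public interface GreeterMBean {
        String greet(String name);
    }

    public static class Greeter implements GreeterMBean {
        public String greet(String name) {
            return "hello " + name;
        }
    }

    public static Object registerAndInvoke() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("myapp:name=Greeter");
        server.registerMBean(new Greeter(), name);
        // Invoke through the MBean server, exactly as a JMX console would.
        return server.invoke(name, "greet",
                new Object[] { "jmx" }, new String[] { "java.lang.String" });
    }
}
```

This is essentially what Spring's MBeanExporter automates for you, minus the metadata assembly.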



Changing Log4j logging levels dynamically
December 13, 2006 8:06 PM

Simple problem, and it may seem oh-not-so-cool: make the log4j level dynamically configurable. You should be able to change from DEBUG to INFO or any of the others, all in a running application server.

First the simple, but not so elegant, approach. Don't get me wrong (about the elegance statement): this approach works.

Log4j API
Often applications will have custom log4j properties files. Here we define the appenders and the layouts for the appenders. Somewhere in the java code we have to initialize log4j and point it to this properties file. We can use the following API call to configure and apply the dynamic update.
org.apache.log4j.PropertyConfigurator.configureAndWatch(
logFilePath,
logFileWatchDelay);
  • Pass it the path to the custom log4j.properties and a delay in milliseconds. Log4j will periodically check the file for changes (after passage of the configured delay time).
Spring Helpers
If you are using Spring then you are in luck: Spring provides ready-to-use classes to do this job. You can use the support class org.springframework.web.util.Log4jWebConfigurer. Provide it values for log4jConfigLocation and log4jRefreshInterval. For the path you can pass either one that is relative to your web application (which means you need to deploy in expanded WAR form) or an absolute path. I prefer the latter; that way I can keep my WAR file warred and not expanded.

There is also a web application listener class org.springframework.web.util.Log4jConfigListener that you can use in the web.xml file. The actual implementation of the Spring class Log4jWebConfigurer does the call to either:
org.apache.log4j.PropertyConfigurator.configureAndWatch 
OR
org.apache.log4j.xml.DOMConfigurator.configureAndWatch
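The listener variant is wired up in web.xml roughly like this (the properties path and refresh interval are examples; adjust to your deployment):

```xml
<context-param>
  <param-name>log4jConfigLocation</param-name>
  <param-value>/WEB-INF/log4j.properties</param-value>
</context-param>
<context-param>
  <param-name>log4jRefreshInterval</param-name>
  <param-value>10000</param-value>
</context-param>
<listener>
  <listener-class>org.springframework.web.util.Log4jConfigListener</listener-class>
</listener>
```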

Log4j spawns a separate thread to watch the file. Make sure your application has a shutdown hook where you can call org.apache.log4j.LogManager.shutdown() to shut down log4j cleanly. The thread unfortunately does not die if your application is undeployed. That's the only downside of using the log4j configureAndWatch API. In most cases that's not a big deal, so I think it's fine.

JMX Approach
JMX is, in my opinion, the cleanest approach. It involves some leg work initially but is well worth it. This example here is run on JBoss 4.0.5. Let's look at a simple class that will actually change the log level.

package com.aver.logging;

import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class Log4jLevelChanger {
public void setLogLevel(String loggerName, String level) {
if ("debug".equalsIgnoreCase(level)) {
Logger.getLogger(loggerName).setLevel(Level.DEBUG);
} else if ("info".equalsIgnoreCase(level)) {
Logger.getLogger(loggerName).setLevel(Level.INFO);
} else if ("error".equalsIgnoreCase(level)) {
Logger.getLogger(loggerName).setLevel(Level.ERROR);
} else if ("fatal".equalsIgnoreCase(level)) {
Logger.getLogger(loggerName).setLevel(Level.FATAL);
} else if ("warn".equalsIgnoreCase(level)) {
Logger.getLogger(loggerName).setLevel(Level.WARN);
}
}
}
  • Given a logger name and a level to change to, this code will do just that. The code needs some error handling and can be cleaned up a little, but it works for what I am showing.
  • To change the log level we get the logger for the specified loggerName and change to the new level.
My application uses Spring so the rest of the configuration is Spring related. Now we need to register this bean as an MBean into the MBeanServer running inside JBoss. Here is the Spring configuration.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE beans PUBLIC "-//SPRING//DTD BEAN//EN"
"http://www.springframework.org/dtd/spring-beans.dtd">
<beans>

<bean id="exporter"
class="org.springframework.jmx.export.MBeanExporter"
lazy-init="false">
<property name="beans">
<map>
<entry key="bean:name=Log4jLevelChanger"
value-ref="com.aver.logging.Log4jLevelChanger" />
</map>
</property>
</bean>

<bean id="com.aver.logging.Log4jLevelChanger"
class="com.aver.logging.Log4jLevelChanger">
</bean>

</beans>
  • In Spring we use the MBeanExporter to register your MBeans with the container's running MBean server.
  • I provide MBeanExporter with references to beans that I want to expose via JMX.
  • Finally my management bean Log4jLevelChanger is registered with Spring.
That's it. With this configuration your bean will get registered into JBoss's MBean server. By default Spring will publish all public methods on the bean via JMX. If you need more control over which methods get published then refer to the Spring documentation. I will probably cover that topic in a separate blog, since I had to do all of that when I set up JMX for a project using Weblogic 8.1. With Weblogic 8.1 things are unfortunately not as straightforward as above. That's for another day, another blog.

One thing to note here is that the parameter names are p1 (for loggerName) and p2 (for level). This is because I have not provided any metadata about the parameters. When I do my blog on using JMX+Spring+CommonsAttributes under Weblogic 8.1, you will see how this can be resolved. BTW, for JDK 1.4 based Spring projects you must use the commons attributes tags provided by Spring to register and describe your beans as JMX beans. The initial minor learning curve will save you tons of time later.



CruiseControl Setup
December 1, 2006 1:24 AM

On a recent project I setup CruiseControl (version 2.5) as our continuous integration build tool. Some folks requested I show them how I set it up...so what better place than here.

cc
|
|--myproject-config.xml
|
|--myproject-build.xml
|
|--startcc.sh
|
|--artifacts
|        |--myproject
|
|--logs
|        |--myproject
|        
|--checkout
|        |--myproject
|
|--webapps
|        | (contains the cruise control webapps)
|

cc is my CruiseControl work directory; call it whatever you want. 'myproject' should be replaced with your project name. The artifacts/myproject folder is where all the results of the build will be written out; these could be code coverage results, junit reports, etc. The folder logs/myproject is where CruiseControl logs will be written. The checkout/myproject folder is where the code for your project gets checked out before a build. Never develop code from this location. The layout described above is designed for building multiple projects under CruiseControl.

Here is myproject-config.xml. This file holds all of the CruiseControl configuration for the project. It covers things like
  • how often to check the repository for changes
  • if there are changes whats the build script to execute
  • after a build where to put all the generated artifacts (like junit reports, etc.)
  • should we send email notifications
  • ... so on ... you get the idea. Refer to CruiseControl at SourceForge.
Here is my main config.xml which simply includes project specific config.xml files:
<!DOCTYPE cruisecontrol [
<!ENTITY myproject SYSTEM "myproject-config.xml">
]>
<cruisecontrol>

<system>
<configuration>
<threads count="2"/>
</configuration>
</system>

&myproject;

</cruisecontrol>
  • Nothing much here. Simply including a project specific config file myproject-config.xml.
  • I decided to have two threads for CruiseControl. If you have multiple projects under CruiseControl management then this configuration may be of interest to you. This way separate project builds will not be queuing up for threads.
Here is the project specific configuration file that is included above, myproject-config.xml:
<project name="myproject" buildafterfailed="true">
<plugin name="starteam" classname="net.sourceforge.cruisecontrol.sourcecontrols.StarTeam"/>
<listeners>
<currentbuildstatuslistener file="logs/myproject/status.txt"/>
</listeners>

<!-- Bootstrappers are run every time the build runs, *before* the modification checks -->
<bootstrappers>
</bootstrappers>

<!-- Defines where cruise looks for changes, to decide whether to run the build -->
<modificationset quietperiod="10">
<starteam username="thomasm" password="" starteamurl="address:49201/view" folder="dev/myproject" />
</modificationset>

<!-- Configures the actual build loop, how often and which build file/target -->
<schedule interval="1800">
<ant antscript="ant" buildfile="myproject-build.xml" uselogger="true" usedebug="false"/>
</schedule>

<log dir="logs/myproject" encoding="UTF-8">
</log>

<publishers>
<currentbuildstatuspublisher file="logs/myproject/buildstatus.txt"/>
<artifactspublisher dir="checkout/myproject/report" dest="artifacts/myproject"/>

<htmlemail mailhost="mailserver.yourcompany.com"
returnaddress="buildmanager@yourcompany.com"
reportsuccess="fixes"
subjectprefix="myproject Build Results"
buildresultsurl="http://yourcompany.com:12000/cruisecontrol/buildresults/myproject"
skipusers="false" spamwhilebroken="false"
css="webapps/cruisecontrol/css/cruisecontrol.css"
xsldir="webapps/cruisecontrol/xsl"
logdir="logs/myproject">
<success address="devmailinglist@yourcompany.com"/>
<failure address="devmailinglist@yourcompany.com"/>
</htmlemail>

</publishers>
</project>

For details on every section please refer to the CruiseControl website at sourceforge.net. Some highlights:
  • StarTeam is my repository of choice. CruiseControl comes with plugins for various repositories. I have in the past used it with Subversion. With StarTeam there is an extra step involved in set up. I will come to this in the end. If you are not using StarTeam then ignore StarTeam specific notes in this blog.
  • In the schedule section I have scheduled CruiseControl to run every 30 minutes. It checks for changes and, if there are any, calls my delegating build script myproject-build.xml. This delegating script is responsible for checking out code and triggering the actual build for the project.
  • In the publishers section I define what gets copied into the artifacts folder. Note my project build ant script does most of the work here. The CruiseControl config above simply copies artifacts produced by it to the correct folders.
Next here is my delegating script.

<project name="myproject" default="all" basedir="checkout/myproject">

<property name="starteam.lib.dir" value="someplace/starteam/starteam-en-8.0.0-java" />

<path id="classpath">
<fileset dir="${starteam.lib.dir}"/>
</path>

<target name="all">
<taskdef name="stcheckout"
classname="org.apache.tools.ant.taskdefs.optional.starteam.StarTeamCheckout"
classpathref="classpath"/>
<stcheckout URL="starteam_address:49201/view"
username="myuser"
password="mypassword"
rootlocalfolder="checkout/myproject"
rootstarteamfolder="dev/myproject"
createworkingdirs="true"
/>

<ant antfile="build.xml" target="cc-run" />

</target>
</project>
  • Very straightforward. We first use the StarTeamCheckout task to check out changes to the checkout/myproject folder.
  • Finally we call the project's build script build.xml. In my case that script does a clean build, produces an ear file, runs the unit tests, and generates junit reports, PMD code analyzer reports and Clover code coverage reports.
That's it for configuration. I had an additional folder called webapps which you can copy as-is from the CruiseControl distribution. You can choose to maintain all of this setup in the CruiseControl install directory if you prefer.

The startcc.sh script which starts CruiseControl is:
somepath/cruisecontrol-bin-2.5/cruisecontrol.sh -cchome somepath/cruisecontrol-bin-2.5 
-ccname projects -webport 12000 -jmxport 12001
  • Since I keep my CruiseControl work area separate from the distribution, I have to reference the distribution's cruisecontrol.sh to start CruiseControl.
  • -webport is used to start the embedded Jetty server on port 12000 and -jmxport to set up the JMX port.
Finally, I encourage you to use a monitoring client application like this firefox plugin for CruiseControl. Once configured (set the URLs in the plugin to the web URL and JMX URL defined previously), the plugin will show up in the bottom right hand corner of your firefox browser.


The light is green if the build is good, else red. Clicking on the lights will take you to the CruiseControl home page for your projects. Here you will see a listing of all configured projects. Clicking on any will take you to the build results page for your project. You can now view all of those generated artifacts from here.

StarTeam:
If you are using StarTeam you need to do one additional piece of set up. Refer to StarTeam Setup For CC. I will repeat the steps to illustrate how I did this. I am using StarTeam 2005 (release 2).
  • Download the source distribution of CruiseControl.
  • Find starteam80.jar from your StarTeam client install and copy it as starteam-sdk.jar to the main/lib folder in your cruisecontrol source folder.
  • Do a build to rebuild the CruiseControl distribution.
  • Copy the main/dist/cruisecontrol.jar to your webapps/cruisecontrol/WEB-INF/lib folder. 
  • Now you can use my startcc.sh script to start and run CruiseControl with StarTeam.
  • Again like I said before you can choose to keep the webapps folder in the same folder as the binary distribution. I just choose to keep everything in my cc folder.

Agile What !!
November 29, 2006 12:00 AM

One of the things on the hype train is agile development. We have started calling common-sense development agile development. Fair enough; we need a way to characterize and refer to this type of development model, and the word agile fits. I am all in favor of agile development, but what most people miss is that the success of an agile project depends not only on the development team embracing agility but also on the business users and the client stakeholders doing the same.

Also, you have to spend time defining agility for your project. What does being agile mean for your project? Short iterations, frequent code drops, a defined continuous feedback loop, the tools you will use to support agility, etc. The list goes on. Will you be following an agile methodology like SCRUM or FDD, or are you happy with a simple home-grown approach? All of this defines your agility, and everyone working on the project (at any level) has to understand the chosen approach.

Many large organizations are not technology firms and only care about getting their projects done. IT is a tool that helps them perform their core business tasks more efficiently. Often these are people who have been in a certain role for years, and they do not want to hear any IT mumbo-jumbo. Regardless, it is critical to get all the stakeholders together and explain your development strategy at a high level. You must explain to everyone what their role will be in the agile process.

Typical of agile development is to break the release into small iterations. Within (and between) iterations the development team will make many code drops into a dev/qc environment. What comes out of an iteration is not production quality code. The idea is to get things done in a nice loop where we can incorporate feedback from everyone into the next iteration. Of course, having small iterations is not an excuse to drop code that is riddled with defects. Each iteration should have requirements, design, code, unit testing, integration testing, and user and testing team testing.

The key to agility is adaptability. We are continuously getting feedback and incorporating it, to a point where this becomes second nature. The testing team is almost an extension of the development team in agile projects. While there should be an allocated time for formal testing between iterations, the testers should be actively testing functionality within an iteration. What gets delivered into the QC environment, within an iteration, should be some amount of incremental functionality (or bug fixes) that can be tested. Again, having testers hooked in early should not be an excuse to skip unit testing. That would be really irresponsible on the part of the development team.

EJB3 Interceptors
November 22, 2006 10:29 PM

Thought I'd give EJB3 interceptors a shot today. It is simple and good enough for most uses. Only class level and method level interceptors are allowed. There is no rich AspectJ (or jboss-aop) style support, and no rich pointcut language. Spring 2.0 is much better in this regard. But like I said before, for most applications what EJB3 provides is more than enough. Even Spring prior to 2.0 had only limited (method level) AOP support.

First let me show you the stateless session bean with the interceptors configured.

package com.aver.service.timer;

import java.text.SimpleDateFormat;
import java.util.Calendar;

import javax.ejb.Stateless;
import javax.interceptor.ExcludeClassInterceptors;
import javax.interceptor.Interceptors;

@Stateless
@Interceptors(PerformanceInterceptor.class)
public class TimeServiceBean implements TimeService
{

public String getTime()
{
SimpleDateFormat formatter = new SimpleDateFormat("MM/dd/yyyy HH:mm:ss");
return "EJB3-SSB-" + formatter.format(Calendar.getInstance().getTime());
}

@ExcludeClassInterceptors
public String getTime2()
{
SimpleDateFormat formatter = new SimpleDateFormat("MM/dd/yyyy");
return "EJB3-SSB-" + formatter.format(Calendar.getInstance().getTime());
}
}

  • The @Interceptors annotation at the class level tells the container to apply the specified interceptor (or a comma separated list of interceptors) to all the methods in the class, except in my case the getTime2 method, which explicitly requested that all class level interceptors be excluded during its execution (using the annotation @ExcludeClassInterceptors).
  • The @Interceptors annotation can be applied at either the class level or the method level.
I have yet to find a good use for @ExcludeClassInterceptors or its close brother @ExcludeDefaultInterceptors. @ExcludeDefaultInterceptors will, as indicated, exclude any default interceptors (which can only be applied via the XML configuration file).

I for one prefer the interceptors being configured outside of the bean itself. Typically the functionality in interceptors is a cross-cutting concern and should not be meddled with by individual beans. You don't want one developer excluding all interceptors and thereby conveniently skipping some security related interceptor in your application. Anyway, the feature is there, so we will all find a good use for it.

Now here is the interceptor implementation in PerformanceInterceptor.java.
package com.aver.service.timer;

import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;

public class PerformanceInterceptor
{
@AroundInvoke
public Object measureTime(InvocationContext ctx) throws Exception
{
long startTime = 0;
try
{
startTime = System.currentTimeMillis();
return ctx.proceed();
}
finally
{
System.out.println(ctx.getTarget().getClass().getName() + "->" + ctx.getMethod() + " executed in "
+ (System.currentTimeMillis() - startTime) + "ms");
}
}
}

This should be fairly obvious. You apply @AroundInvoke to a method to indicate that it is the interceptor itself. You need the method signature exactly as indicated above; of course the method name can be different.

That is it ... now when the application executes, this interceptor is applied to the TimeService.getTime method only.

What I do not like about this is the use of the @AroundInvoke annotation. I would prefer that the interceptor class implement an interface similar to the AOP alliance org.aopalliance.intercept.MethodInterceptor. I personally think the annotation above is not clean. But that's me...others may love it. If anyone has an opinion either way, do let me know.
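For comparison, the interface-driven around-advice style I am describing can be sketched with a plain JDK dynamic proxy and no container at all (the interface and method names here are invented for illustration; this is the same timing logic as PerformanceInterceptor, just hung off an InvocationHandler instead of @AroundInvoke):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ProxyTimingDemo {
    public interface TimeService {
        String getTime();
    }

    // Wraps a target in around-advice: time the call, then delegate,
    // analogous to ctx.proceed() inside an @AroundInvoke method.
    public static TimeService timed(final TimeService target) {
        InvocationHandler handler = new InvocationHandler() {
            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                long start = System.currentTimeMillis();
                try {
                    return method.invoke(target, args);
                } finally {
                    System.out.println(method.getName() + " executed in "
                            + (System.currentTimeMillis() - start) + "ms");
                }
            }
        };
        return (TimeService) Proxy.newProxyInstance(
                TimeService.class.getClassLoader(),
                new Class<?>[] { TimeService.class }, handler);
    }
}
```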

Lastly, there is another way to implement the interceptor. In the implementation above I defined the interceptor externally in the class PerformanceInterceptor.java. You can, if you want, directly implement the @AroundInvoke method in the bean class itself. I just do not like that: what you put in interceptors are cross cutting features, and cross cutting features are best left in their own class.

Upgraded to jboss-EJB-3.0_RC9
November 22, 2006 9:55 PM

I upgraded to the latest EJB3 update from JBoss. I switched to jboss-EJB-3.0_RC9-FD (I was using RC1). My custom @InjectService annotation implementation had to change slightly. Here is the updated class.

package com.aver.web;

import static java.lang.System.out;

import java.lang.reflect.Field;

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

import com.aver.web.annotation.InjectService;
import com.opensymphony.xwork.ActionInvocation;
import com.opensymphony.xwork.interceptor.AroundInterceptor;

public class ServiceInjector extends AroundInterceptor
{
protected void after(ActionInvocation arg0, String arg1) throws Exception
{
}

protected void before(ActionInvocation invocation) throws Exception
{
Object action = invocation.getAction();
Class clazz = action.getClass();
out.println(action.getClass().getName());
for (Field field : clazz.getDeclaredFields())
{
out.println(">>> " + field.getName());
if (field.isAnnotationPresent(InjectService.class))
{
out.println("FIELD " + field.getName() + " IS ANNOTATED WITH @InjectService -> "
+ field.getType().getSimpleName());
field.set(action, getService("myapp/" + field.getType().getSimpleName() + "Bean/local"));
}
}

}

private Object getService(String serviceJndiName)
{
Context ctx;
try
{
ctx = new InitialContext();
return ctx.lookup(serviceJndiName);
}
catch (NamingException e)
{
e.printStackTrace();
}
return null;
}
}

The JNDI name by default (if you deploy in ear format) is earfilename/beanname/local (or remote for remote interfaces). So in my case TimeService ends up bound at myapp/TimeServiceBean/local.
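The naming convention my ServiceInjector relies on can be pulled into a tiny helper; a sketch (the "myapp" prefix is just my ear file name, and the nested TimeService interface stands in for the real one):

```java
public class JndiNames {
    // Builds the default JBoss EJB3 JNDI name for an ear deployment:
    //   earName/<InterfaceName>Bean/local
    // mirroring the string concatenation inside ServiceInjector.before().
    public static String localName(String earName, Class<?> serviceInterface) {
        return earName + "/" + serviceInterface.getSimpleName() + "Bean/local";
    }

    // Stand-in for the real service interface, for illustration only.
    public interface TimeService {
        String getTime();
    }
}
```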

Once past this all is good.

EJB3 With WebWork
November 20, 2006 6:43 PM

The letters 'EJB' cause me to slouch back and say "oh no...". But after a few encouraging words to myself, I gave it a shot. If you look at my previous blog on WebWork you will see that WebWork was used as the web-tier framework and Spring was used to manage the business/service tier. On the service tier there existed a 'TimeService' which would (you are right) return the time of day. One WebWork action, named Timer, would "talk" to the business tier to get the time. WebWork can be configured to inject Spring objects into the action; that is the mechanism I used to get the job done there.

Now I decided to use the exact same sample in the EJB example here. Instead of the Spring based service tier I wrote up the service tier as EJBs (only stateless session beans). My aim was to inject the EJB service into the WebWork action. There is no Spring in this example. Spring is gone. I am neither from the Spring camp nor the EJB3 camp. I am from the 'let's get the job done' camp. And with EJB3 I do see a good alternate programming model. It's all dependency injection in EJB3 so it will smell and feel a little like Spring. BTW I ran this example on JBoss 4.0.5.

Here is the service interface for the time service.

package com.aver.service.timer;

public interface TimeService {
    public String getTime();
}

And here is the stateless session bean implementation of the service. As you can see it's clearly a POJO. No pesky interfaces and callbacks to implement. Here is where it smells and feels like Spring (or any other decent IOC engine).
package com.aver.service.timer;

import java.text.SimpleDateFormat;
import java.util.Calendar;

import javax.ejb.Local;
import javax.ejb.Stateless;

@Stateless
@Local(TimeService.class)
public class TimeServiceBean implements TimeService {

    public String getTime() {
        SimpleDateFormat formatter = new SimpleDateFormat("MM/dd/yyyy HH:mm:ss");
        return "EJB3-SSB-" + formatter.format(Calendar.getInstance().getTime());
    }

}

The only thing that tells me this is an EJB is the annotations. The class is marked as a stateless session bean with a local interface of TimeService.

Now let's see the Timer action class in the web tier. This is the WebWork action class.
package com.aver.web;

import com.aver.service.timer.TimeService;
import com.aver.web.annotation.InjectService;
import com.opensymphony.xwork.ActionSupport;

public class Timer extends ActionSupport {
    private String userName;

    private String message;

    @InjectService
    TimeService timeService;

    public String getTime() {
        message = timeService.getTime();
        return SUCCESS;
    }

    public String askForTime() {
        return SUCCESS;
    }

    public String getMessage() {
        return message;
    }

    public String getUserName() {
        return userName;
    }

    public void setUserName(String userName) {
        this.userName = userName;
    }
}

As you can see it's really simple. Wait... what is this @InjectService annotation? You can see that in the getTime method I call out to the service to get the time. In the WebWork+Spring blog I could depend on the WebWork+Spring integration for the service to be injected in. Not any more. Now my TimeService is an EJB. I could do away with @InjectService and instead directly look up the EJB in the JNDI context. But that would not be a good architectural approach. You want to consolidate the service locator feature into one place. With the introduction of annotations in Java 5 you now have a powerful way of moving such features into annotations. Note that in EJB3 you can do dependency injection within and between container managed objects. The Timer class is not an EJB3 managed object, so you are on your own.

Here is the annotation.
package com.aver.web.annotation;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
public @interface InjectService {
}

This is a marker annotation used to mark fields where service injection is required. I am not a fan of setter injection for such things. Why write a setTimeService method just to have injection done there, for a false sense of OO compliance?
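To see the field-injection mechanics in isolation, here is a self-contained sketch. The class and field names here are illustrative stand-ins, and the string assigned below stands in for the real JNDI lookup.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

public class AnnotationScan {

    // Same shape as the marker annotation above.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface InjectService {
    }

    // Illustrative action class: one marked field, one plain field.
    static class SomeAction {
        @InjectService
        Object timeService; // marked for injection
        String userName;    // plain field, ignored by the scan
    }

    public static void main(String[] args) throws Exception {
        SomeAction action = new SomeAction();
        for (Field field : action.getClass().getDeclaredFields()) {
            if (field.isAnnotationPresent(InjectService.class)) {
                field.setAccessible(true); // in case the field is not accessible
                field.set(action, "looked-up service"); // stand-in for the JNDI lookup
                System.out.println("injected " + field.getName());
            }
        }
    }
}
```

Only the annotated field is touched; this is exactly the loop the interceptor below runs for real.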

Finally, to inject the service I wrote up a quick WebWork interceptor that looks up the JNDI context and injects the desired service. This code could be more robust, but it works.
package com.aver.web;

import static java.lang.System.out;

import java.lang.reflect.Field;

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

import com.aver.web.annotation.InjectService;
import com.opensymphony.xwork.ActionInvocation;
import com.opensymphony.xwork.interceptor.AroundInterceptor;

public class ServiceInjector extends AroundInterceptor {
    protected void after(ActionInvocation arg0, String arg1)
            throws Exception {
    }

    protected void before(ActionInvocation invocation)
            throws Exception {
        Object action = invocation.getAction();
        Class clazz = action.getClass();
        out.println(action.getClass().getName());
        for (Field field : clazz.getDeclaredFields()) {
            out.println(">>> " + field.getName());
            if (field.isAnnotationPresent(InjectService.class)) {
                out.println("FIELD " + field.getName()
                        + " IS ANNOTATED WITH @InjectService");
                // The field may not otherwise be accessible to the interceptor.
                field.setAccessible(true);
                // Build the earname/beanname/local JNDI name for the service.
                field.set(action, getService("myapp/"
                        + field.getType().getSimpleName() + "Bean/local"));
            }
        }
    }

    private Object getService(String serviceJndiName) {
        Context ctx;
        try {
            ctx = new InitialContext();
            return ctx.lookup(serviceJndiName);
        }
        catch (NamingException e) {
            e.printStackTrace();
        }
        return null;
    }
}

To configure the interceptor, update the xwork.xml file as follows.
.. other stuff ...
<interceptors>
    <interceptor name="appUserInterceptor" class="com.aver.web.ExecutionTimeInterceptor"/>
    <interceptor name="serviceInjector" class="com.aver.web.ServiceInjector"/>
    <interceptor-stack name="appInterceptorStack">
        <interceptor-ref name="serviceInjector"/>
        <interceptor-ref name="appUserInterceptor"/>
        <interceptor-ref name="defaultStack"/>
    </interceptor-stack>
</interceptors>

<default-interceptor-ref name="appInterceptorStack"/>

.. other stuff ...

Compile and deploy the application. The download jar contains the full source and the Ant script to create the ear file. Now when you hit the URL localhost:9090/webworks/askfortime.jsp you will see:


Enter name and hit 'Get Time'. You should now see:




To download the code click here. The code is missing all of the external jar files for EJB3. This project was built with JBoss. The generated ear file myapp.ear will contain:
- service.ejb3 (this is a jar file named with a .ejb3 extension so that JBoss knows to look inside it for EJB3 beans).
- webworks.war (all of the code and libraries for the WebWork supported client tier).
- META-INF/application.xml (contains references to the web module and ejb module).

Also I had the following JBoss libraries in my Eclipse/Ant classpath for compilation. Do not package these with the ear file.
  • jboss-4.0.5.GA/server/all/deploy/ejb3.deployer/ejb3-persistence.jar
  • jboss-4.0.5.GA/server/all/deploy/ejb3.deployer/jboss-ejb3x.jar
  • jboss-4.0.5.GA/server/all/deploy/ejb3.deployer/jboss-ejb3.jar

Other than this I had the following jars for WebWork support in the war file. You can drop them in the lib folder of the project:
  • xwork.jar
  • webwork-2.2.4.jar
  • tiles-core.jar
  • rife-continuations.jar
  • oscore.jar
  • ognl.jar
  • freemarker.jar
  • commons-logging-1.0.4.jar
  • commons-digester-1.6.jar
  • commons-beanutils-1.7.0.jar
I started JBoss with run.sh -c all.

As you can see it's clearly possible to build a system around EJB3 (this is for those of us for whom EJB 1 and 2 left a bad taste in our mouths). For those starting fresh with EJB3 ... it does not matter.

Java 6 - JavaScript Support
November 9, 2006 9:38 PM

One of the new features in Java 6 is the ability to execute code written in scripting languages such as JavaScript. Java 6 will be released with the Mozilla Rhino engine for JavaScript. Others are working on doing the same with other languages. I heard someone mention Visual Basic the other day.

Let me get straight to an example:
import static java.lang.System.out;

import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.Reader;

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class TryOutJavaScript {

    public static void main(String[] args) {
        new TryOutJavaScript().runInlineJSCode();
        new TryOutJavaScript().runExternalJSCode();
    }

    void runInlineJSCode() {
        ScriptEngineManager scriptMgr = new ScriptEngineManager();
        ScriptEngine jsEngine = scriptMgr.getEngineByName("JavaScript");
        try {
            jsEngine.eval("print('Hello from inline exec!')");
            out.println();
        }
        catch (ScriptException ex) {
            ex.printStackTrace();
        }
    }

    void runExternalJSCode() {
        ScriptEngineManager scriptMgr = new ScriptEngineManager();
        ScriptEngine jsEngine = scriptMgr.getEngineByName("JavaScript");
        InputStream is = this.getClass().getResourceAsStream("greetings.js");
        try {
            Reader reader = new InputStreamReader(is);
            jsEngine.put("name", "Mathew");
            out.println(jsEngine.eval(reader));
            out.println(jsEngine.get("name"));
        }
        catch (ScriptException ex) {
            ex.printStackTrace();
        }
    }
}

Let's look at the method runInlineJSCode.
  • First we import the required classes from the new package javax.script
  • ScriptEngineManager is our entry point into the scripting module.
  • We use an instance of the ScriptEngineManager to get to a script engine represented by ScriptEngine. This can be done by specifying the language name. There are other ways to get to the ScriptEngine. I will only use this one.
  • After that you are free to do pretty much anything you want (as supported by the scripting engine of choice). For example we call the method jsEngine.eval to execute arbitrary lines of code against the scripting engine. In this case the Rhino JavaScript engine.
  • If you execute the above code you will see 'Hello from inline exec!' printed out on the console.

Next we look at a slightly different scenario. We have an external JavaScript file and we want to execute some code in that file. Refer to the method runExternalJSCode.
  • First we import the required classes from the new package javax.script
  • This time we get a reference to a InputStream for the external JavaScript file greetings.js.
  • Also I send a variable 'name' into the jsEngine. In my case the JavaScript modifies the content and you will be able to access the updated value through the engine. The js code is listed below. Really simple there.
  • Now we call the method jsEngine.eval to execute the JavaScript code.
  • If you execute the above code you will see 'JavaScript says - how do u do Mathew! ' printed out on the console.
The JavaScript code in greetings.js:
function Greeter() {
}

Greeter.prototype.saySomething = function() {
    name = name + "! ";
    return "JavaScript says - how do u do " + name;
}

var grt = new Greeter();
grt.saySomething();

The result of executing the java code is:
Hello from inline exec!
JavaScript says - how do u do Mathew!
Mathew!

This is all well and good. I can see this feature being used constructively but it is also open to abuse. Here is a good scenario. Often web applications that require credit card processing have some JavaScript code to perform validations on the card numbers. Rather than recode that in Java we could call out to the same JavaScript method to perform the validation.
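Here is a sketch of that call path using javax.script. The validation function is a trivial stand-in for a real shared check (a Luhn-style routine would live in the same .js file the browser loads); note that recent JDKs no longer bundle a JavaScript engine, which the code guards against.

```java
import javax.script.Invocable;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class SharedValidation {
    public static void main(String[] args) throws Exception {
        ScriptEngine js = new ScriptEngineManager().getEngineByName("JavaScript");
        if (js == null) {
            // JDK 15+ ships no JavaScript engine out of the box.
            System.out.println("no JavaScript engine available");
            return;
        }
        // Evaluate the (hypothetical) shared validation function once...
        js.eval("function checksumOk(n) { return n % 10 == 0; }");
        // ...then call it by name from Java via the Invocable interface.
        Invocable inv = (Invocable) js;
        System.out.println(inv.invokeFunction("checksumOk", 1230));
    }
}
```

The same eval/invokeFunction pair works whether the source is an inline string, as here, or the very file served to the browser.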

Here is a bad scenario. Developers (or Architects) decide to integrate multiple scripting languages into the same project (JavaScript, Groovy, Ruby etc). This can be a painful architecture to maintain. I will leave it at that.

Code Analyzer - PMD
November 3, 2006 10:26 PM

If you have a code review process in place, how do you make sure that the review is fruitful? The last thing you want is to spend many hours across different reviews pointing out the same thing over and over again. There are common code mistakes that we all make and it is impossible for a developer to catch them all. That's where tools such as CheckStyle and PMD among others come into the picture. These are code analyzers that will run through the code and check for conformance to a slew of good programming conventions (rules).

For example lets say you are a diligent developer who never eats exceptions. But then one day, without realizing, it happens. How do you catch it? You need a code analyzer if you want to guarantee finding the bad code.

PMD defines a whole set of rules grouped into rule sets. You can choose to apply only those rule sets of interest to your project. Of course you can add your own rules if you choose. PMD integrates with Eclipse (among other IDEs) and Ant (making it perfect for any Continuous Integration environment). If you really care about code quality make sure to have this setup in your IDE and in Ant. One without the other is useless. Having it in your IDE forces you to write good code from the beginning. This may seem painful in the beginning but in a few days writing code that conforms to the rules will be second nature to you. Anything you miss will be caught in the reports generated by PMD (you can get HTML reports or get XML and apply whatever style sheet you want over that).
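To make the Ant side concrete, here is a rough sketch of the PMD task wiring from that era (PMD 3.x/4.x). The taskdef class, rule set paths and file locations here are from memory, so check the PMD documentation for your version.

```xml
<!-- pmd.classpath must point at the PMD jars; paths are examples. -->
<taskdef name="pmd" classname="net.sourceforge.pmd.ant.PMDTask"
         classpathref="pmd.classpath"/>

<target name="pmd">
    <pmd rulesetfiles="rulesets/basic.xml,rulesets/logging-jakarta-commons.xml">
        <!-- HTML report; an xml formatter plus your own stylesheet also works. -->
        <formatter type="html" toFile="reports/pmd.html"/>
        <fileset dir="src">
            <include name="**/*.java"/>
        </fileset>
    </pmd>
</target>
```

The logging-jakarta-commons rule set in the example is the one containing the two rules discussed below.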

I do not have a detailed comparison between PMD and CheckStyle. But the one difference I seem to recollect is that CheckStyle can also be configured to check for JavaDoc and code style conventions. By code style I mean things like whether the braces appear on a new line vs the same line as the function, and so on. But if code quality is the main goal then PMD will suffice. I for one am not a big fan of forcing a style on everyone. It is more important to catch real code problems vs style. If you can decide on a style, great, then implement that across the project. Otherwise don't bother.

One interesting feature of PMD is copy-paste detection or CPD. Need I say more about that?

To give you a flavor of some of the rule sets, here is one related to Jakarta logging. In the case where the application is logging exceptions, this rule set checks that we conform to the two-parameter logging API.
   try {
        // some code
   }
   catch(SomeException ex) {
        log.error("some message", ex);
        // other exception handling code
   }

If we did log.error(ex) we actually just ate the stack trace and that can be a bad thing. I have made that mistake in the past without realizing it and then had it pointed out to me, or simply realized it when the stack trace did not show up. But then we are depending on one of those two happening to find the problem. God forbid there is a production problem that cannot be replicated in other environments and you realize you just ate up the stack trace.
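The difference is easy to demonstrate with plain JDK classes. This sketch stands in for the two logging calls: the single-argument form ends up recording only the exception's toString(), while the two-argument form keeps the stack frames.

```java
import java.io.PrintWriter;
import java.io.StringWriter;

public class EatenStackTrace {
    public static void main(String[] args) {
        try {
            Integer.parseInt("not a number");
        } catch (NumberFormatException ex) {
            // What log.error(ex) effectively records: just toString(), no frames.
            String messageOnly = String.valueOf(ex);
            // What log.error("some message", ex) records: the full trace.
            StringWriter sw = new StringWriter();
            ex.printStackTrace(new PrintWriter(sw));
            String fullTrace = sw.toString();
            // Does each capture contain a frame from this class?
            System.out.println(messageOnly.contains("at Eaten"));
            System.out.println(fullTrace.contains("at Eaten"));
        }
    }
}
```

The first line prints false (no frames survive), the second true.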

The second rule enforces that we have a 'private static final Log' instance with the current class passed to LogFactory.getLog.

private static final Log logger = LogFactory.getLog(Account.class);

would be the right thing in the Account class. But if you changed it to

private static final Log logger = LogFactory.getLog(Customer.class);

then that's wrong. Well, it will work, but why would you want to log messages under the Customer class when you are in Account?



WebWork
October 25, 2006 7:14 PM

Those who have used Struts know it is an action based framework. You write action classes that do certain work. You have an XML configuration file where you register the actions, the forms (beans) that the actions expect and the response mechanisms. You will also then be painfully aware of some of the configuration headaches on large projects. One of the advantages of Struts is that it can provide a level of certainty to your web based application development model. What does that mean? For most web projects one of the existing web frameworks will suffice. Struts is well-known, well documented and relatively simple to use. You can find good people to write applications with Struts and also you will find enough good people to maintain those applications.

The power of a framework is not in how cool it is during development but how easy it is for others to pick it up and be productive.

Having said all of that about Struts, I do think if you are starting a new project you should probably look at Tapestry or WebWork as your framework of choice. I blogged on Tapestry previously. So let me blog on about WebWork now. Point to remember: WebWork is the new Struts 2.0 (still not in final form). The two frameworks have joined hands to give us a new, easier to use web framework under the Struts 2.0 banner.

Meanwhile, for those starting new projects now, you can still use the released versions of WebWork. Just keep a watch on news about Struts 2.0. If you look at WebWork it is immediately obvious that it's a great framework. The port over to Struts 2.0 will be a lot easier if you start with WebWork. Let's begin by writing up a sample application.
  • Display a page with single form element for user name.
  • Submitting the page above will take you to a second page with a message (including the user name and a time stamp).
  • The web component will "talk" to a Spring based back end to retrieve some part of the message. In this example the business service 'TimeService' will give us the exact time of the day.
The Action Class:
Here is the action class for our little application from bullet two onwards.
package com.aver.web;

import com.aver.service.timer.TimeService;
import com.opensymphony.xwork.ActionSupport;

public class Timer extends ActionSupport {

    private String userName;

    private String message;

    private TimeService timeService;

    public String getTime() {
        message = timeService.getTime();
        return SUCCESS;
    }

    // Mapped in xwork.xml; simply renders the ask-for-time page.
    public String askForTime() {
        return SUCCESS;
    }

    public String getMessage() {
        return message;
    }

    public void setTimeService(TimeService timeService) {
        this.timeService = timeService;
    }

    public String getUserName() {
        return userName;
    }

    public void setUserName(String userName) {
        this.userName = userName;
    }
}
  • Action class Timer extends ActionSupport (a helper action implementation).
  • The incoming form will contain a form element named 'userName'. We provide a setter and getter for the user name attribute. WebWork will copy the form contents into this field. Rather than provide a simple field we could also provide a bean to hold the form data. Say we have a bean named UserContext with a setter and getter for user name inside of it. We then provide setUserContext and getUserContext methods on our action above. In the HTML form we just have to make sure to name the text field as 'userContext.userName' rather than 'userName'.
  • Anything that needs injection will need a setter method. We want to inject our Spring based back end TimeService class into this action.
  • getTime(..) is the action method that the web request will invoke to get the time of the day.
  • Think of this workflow:
    • User submits form.
    • WebWork resolves (we will see how in a moment) the request to this action and to method 'getTime'.
    • getTime(..) will do the grunt work of calling the back end and doing all that's required to get the job done.
    • getTime(..) places the result into an action property 'message'. There is a 'getMessage' which can be called to get the contents. 
    • The JSP is then rendered. The JSP calls on the just executed action class to get the message for display.
    • How does the action know which JSP to render? The action method returns a SUCCESS indicator. The success indicator is mapped in the xwork.xml file to a specific JSP. Simple. We will cover the xwork.xml file a little later.

The JSP's

Here is the JSP with a form that you submit to get the time.
<%@ taglib prefix="ww" uri="/webwork" %>
<html>
<head>
<title>Get Time</title>
</head>
<body>
<form action="showTime.action" method="post">
<p>User Name:<input type="text" name="userName"></p>
<p><input type="submit" value="Get Time" /></p>
</form>
</body>
</html>
  • As you can see I have used normal HTML elements in the JSP above. You could use the form tags that come along with WebWork on a real project.
  • The form action submits to 'showTime.action'. Refer to the xwork.xml file below.
  • The text field has a name userName. This matches up with the 'setUserName' on the action class so that the value from the submitted form can be injected into the action.

The JSP file that serves up the response HTML is:
<%@ taglib prefix="ww" uri="/webwork" %>
<html>
<head>
<title>Show Time</title>
</head>
<body>
<ww:property value="userName"/>, it's now <ww:property value="message"/>
</body>
</html>
  • What you notice almost immediately are the custom tags that WebWork provides. There is a tag for almost everything you would want, including stuff like iteration.
  • The ww:property tags in this jsp refer to 'userName' and 'message'. They end up getting resolved to getUserName and getMessage on our action class. The action class, remember, has finished executing by now and returned SUCCESS, which in the xwork.xml file resolves to the jsp above. Read on for the xwork.xml file.
Interceptors:
Here is another must-know feature of WebWork: interceptors. These are classes that can be used to intercept the request before and after processing so that you may do useful work (like apply security, do some global logging, or say attach a Hibernate session to the thread, etc.). You can create any number of interceptors and stack them in order for execution. You will almost always end up using the predefined interceptors provided by WebWork. More when I cover xwork.xml. Ya I know this file seems to have some magic, right! Be patient.
package com.aver.web;

import com.opensymphony.xwork.ActionInvocation;
import com.opensymphony.xwork.interceptor.AroundInterceptor;

public class ExecutionTimeInterceptor extends AroundInterceptor {

    protected void after(ActionInvocation arg0, String arg1) throws Exception {
    }

    protected void before(ActionInvocation arg0) throws Exception {
    }
}
  • The 'ExecutionTimeInterceptor' can do just that. Details are not essential here. I just want to introduce you to the concept of interceptors in WebWork. There is some configuration involved...hold your breath. It will all come together.
Xwork.xml:
Now for the glue that glues all of this together...the xwork.xml file.
<!DOCTYPE xwork PUBLIC "-//OpenSymphony Group//XWork 1.0//EN"
"http://www.opensymphony.com/xwork/xwork-1.0.dtd">

<xwork>
    <include file="webwork-default.xml"/>

    <package name="default" extends="webwork-default">

        <!-- =================================================== -->
        <!-- INTERCEPTORS -->
        <!-- =================================================== -->
        <interceptors>
            <interceptor name="appUserInterceptor" class="com.aver.web.ExecutionTimeInterceptor">
            </interceptor>
            <interceptor-stack name="appInterceptorStack">
                <interceptor-ref name="appUserInterceptor"/>
                <interceptor-ref name="defaultStack"/>
            </interceptor-stack>
        </interceptors>

        <default-interceptor-ref name="appInterceptorStack"/>

        <!-- =================================================== -->
        <!-- ACTIONS -->
        <!-- =================================================== -->
        <action name="showTime" class="com.aver.web.Timer" method="getTime">
            <result name="success">showtime.jsp</result>
        </action>
        <action name="askForTime" class="com.aver.web.Timer" method="askForTime">
            <result name="success">askfortime.jsp</result>
        </action>

    </package>
</xwork>
  • Note how we define our custom interceptor (appUserInterceptor) and apply that to the app defined interceptor stack (appInterceptorStack). The 'defaultStack' is a WebWork predefined set of interceptors that do some processing for us (check their web site). We basically want to add our interceptor before the 'defaultStack'. One of the interceptors in the default stack is responsible for injecting form submission parameters into your action. This should give you a good idea of the use of interceptors.
  • With the 'default-interceptor-ref' XML element we apply this custom interceptor stack.
  • Finally we define our action classes. Not much here, but you will appreciate the simplicity and of course no more form bean stuff (like Struts). Even this can be thrown out if we choose to use 'XDoclet'.
  • When WebWork needs to find an action mapping this is where it looks.
  • Place this file in your WEB-INF/classes folder.
Spring Integration:
We need a file named webwork.properties in the WEB-INF/classes folder to configure Spring.
webwork.objectFactory=spring
webwork.objectFactory.spring.autoWire=type
  • webwork.objectFactory: 'spring' to allow injecting spring beans into your actions.
  • webwork.objectFactory.spring.autoWire: Spring auto wiring. I chose 'type'. Default is 'name'.
I do not go into the details of the Spring backend as this blog is not about Spring. The packaged example has a sample TimeService with the following interface and a configuration file named service-registry.xml.
package com.aver.service.timer;

public interface TimeService {
    public String getTime();
}
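The service-registry.xml itself is not listed in this post. With autowire by type, a minimal version only needs a single bean of type TimeService; a sketch (TimeServiceImpl is a hypothetical implementation class name, and the DTD header is elided):

```xml
<beans>
    <!-- Autowired by type into any action exposing a TimeService setter. -->
    <bean id="timeService" class="com.aver.service.timer.TimeServiceImpl"/>
</beans>
```

WebWork's Spring object factory matches this bean to Timer's setTimeService property by its type, so the bean id does not have to match the property name.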

Web.xml
Finally here is the web.xml file.
<?xml version="1.0"?>
<!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
"http://java.sun.com/dtd/web-app_2_3.dtd">

<web-app>
    <display-name>Sample WebWork App</display-name>

    <context-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>
            classpath*:service-registry.xml
        </param-value>
    </context-param>

    <filter>
        <filter-name>webwork</filter-name>
        <filter-class>com.opensymphony.webwork.dispatcher.FilterDispatcher</filter-class>
    </filter>

    <filter-mapping>
        <filter-name>webwork</filter-name>
        <url-pattern>/*</url-pattern>
    </filter-mapping>

    <listener>
        <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
    </listener>

    <taglib>
        <taglib-uri>/webwork</taglib-uri>
        <taglib-location>/WEB-INF/lib/webwork-2.2.4.jar</taglib-location>
    </taglib>
</web-app>

Once deployed you should be able to get to the action directly by
http://localhost:7001/webworks/showTime.action?userName=Matt

I have not shown any of the Spring related configuration. I will assume that the reader knows how to do that. The complete application can be downloaded with its Ant build script by clicking here. If you download the sample source code it will contain the Spring service implementation too. Ignore any Tiles related stuff in the downloaded zip. I have not completed WebWork and Tiles integration yet, though it looks relatively simple.

Summary:
You should by now have a good idea (and a working sample) of building an end-to-end web application with WebWork and Spring. We are not configuration free yet. We still have to maintain the xwork.xml file. My recommendation is to use XDoclet to generate the action details. That will greatly simplify things.

In comparison to Tapestry I did find WebWork a lot easier to get started with. The learning curve is a lot smaller. I have seen a lot of people (myself included) praising Tapestry but always being cautious regarding its learning curve. The thing I like most about Tapestry is the fact that there are no custom tag libs. Instead everything required is expressed as part of the standard HTML tag vocabulary. That makes the page easily editable in any HTML editor and also easier to work with between the web developer and the Java developer. A point to note is that both frameworks use OGNL as the expression language.

XSLT group-by
July 10, 2006 8:13 PM

While doing some reading (or catching up) on new features in XSLT 2.0, I came upon a very useful addition. It is possible to implement this in XSLT 1.0, though rather painfully, but it's a lot more intuitive in 2.0. Let's say we have the following XML data with employee bonus amounts:


<company>
   <employee name="Matt" quarter="q1" amount="700"/>
   <employee name="Matt" quarter="q2" amount="200"/>
   <employee name="Matt" quarter="q1" amount="300"/>
   <employee name="SamosaMan" quarter="q1" amount="400"/>
   <employee name="SamosaMan" quarter="q2" amount="60"/>
</company>

I would like to display an HTML page to summarize each employee's bonus amounts grouped by quarters (q1, q2, etc).


<xsl:for-each-group select="company/employee" group-by="@name">
   <tr>
      <td><xsl:value-of select="@name"/></td>
      <td>
         <table>
             <xsl:for-each-group select="current-group()" group-by="@quarter">
               <tr>
                  <td><xsl:value-of select="@quarter"/>=<xsl:value-of select="sum(current-group()/@amount)"/></td>
               </tr>
             </xsl:for-each-group>
         </table>                 
      </td>
   </tr>
</xsl:for-each-group>


  1. First we use the tag xsl:for-each-group and select the initial list of employees with the select clause 'company/employee'.
  2. We also tell the transformer engine to group the resulting sequence of nodes using group-by='@name'.
  3. With this we now have a sequence of employee nodes grouped by the employee name.
  4. Now we can print the name of each employee. But we need to go one level deeper and summarize their bonus amounts by quarter. As you can see these employees are lucky in that they get multiple bonuses in the same quarter.
  5. Now for the current employee we do another group-by, this time against the quarter attribute with group-by='@quarter'.
  6. A point to note is the use of select='current-group()' which gives us a way to reference the nodes of the group currently being processed. The rest should be obvious.

The display (once you fill in the blanks around the XSLT) is:
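(The original screenshot is not reproduced here; working the sums out by hand from the XML above, the rendered table amounts to the following.)

```
Matt        q1=1000   q2=200
SamosaMan   q1=400    q2=60
```

Matt's q1 total is the 700 and 300 entries summed; every other quarter has a single entry.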


The java code to transform and print the HTML to the console:

// company.xml and transform.xsl are illustrative file names for the
// XML data and style sheet shown above.
TransformerFactory factory = TransformerFactory.newInstance();
Source xmlSource = new StreamSource(new File("company.xml"));
Source xsltSource = new StreamSource(new File("transform.xsl"));
Result result = new StreamResult(System.out);
Transformer trans = factory.newTransformer(xsltSource);
trans.transform(xmlSource, result);

Last but not least, you will need an XSLT processor that supports XSLT 2.0 and XPath 2.0. Neither of the specs are final releases yet. But Saxon has an implementation of the pre-release version out for use.

Before I stop I do want to make sure that the reader is 'aware' of how the Java runtime 'knows' which XSLT engine to use. Remember JAXP? Well, it was created to insulate developers from the gory details of different vendor implementations. So in our case we may want to use Apache Xalan as our XSLT processor, or, as in my case, Saxon's implementation. I used Saxon because their current release supports XSLT 2.0, on which this blog is based.

Ok so how do we tell Java to use our choice of XSLT processor? The implementation is located in the following order:

  1. You can pass the system property -Djavax.xml.transform.TransformerFactory=net.sf.saxon.TransformerFactoryImpl
  2. If the system property could not be located, the JRE lib/jaxp.properties file is checked for the same name/value pair mentioned above.
  3. If 1 and 2 fail then the service provider mechanism is used to locate the implementation. This is what my example used. How do I know that? Check the file META-INF/services/javax.xml.transform.TransformerFactory in saxon8.jar. It contains 'net.sf.saxon.TransformerFactoryImpl'.
  4. Finally if all fails then the default implementation shipped with the JDK is used.
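A quick way to check which implementation the lookup chain picked on your JVM (with saxon8.jar on the classpath via step 3 you should see Saxon's class; without it, the JDK default):

```java
import javax.xml.transform.TransformerFactory;

public class WhichTransformer {
    public static void main(String[] args) {
        // newInstance() walks the lookup order described above:
        // system property, jaxp.properties, services file, JDK default.
        TransformerFactory factory = TransformerFactory.newInstance();
        System.out.println(factory.getClass().getName());
    }
}
```

The exact class name printed depends on your classpath, which is the point of the exercise.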

Tapestry 4
June 16, 2006 8:15 PM


Are there too many web frameworks out there? Well, for those in the know the answer will be a resounding yes. Which one to pick has become a really painful decision! Once you take one path you cannot just switch the framework mid-way. There is always the good old Struts framework. But that seems to be “oh not so fashionable nowadays”. Ah, that ‘Ruby on Rails’ … and then you can sift through the web to find its Java inspired half-brother. Or should we Seam with JBoss Seam? Though in all fairness JBoss Seam cannot be called just a web framework. It is a complete framework for front-end and back-end development.

Then of course there is the macho-man approach. Roll up your sleeves and write your own web framework. Being an independent consultant I am not too interested in creating my own web framework and leaving the client with an unsupported framework when I leave. So I will leave that option out.


On a brand new project what do you use? In my quest to find that answer through experimentation I decided to give Tapestry a try. I liked what I saw initially, though I got very tired of the .page files. It was possible, in most cases, to reduce them to a bare minimum. And if you are lucky enough to use JDK 1.5 then annotations come to the rescue. Right at the onset let me tell you one thing: there is a sharp learning curve with Tapestry. But once you get the feel of the framework things become easier and actually fun.

I am going to go through a simple example in this article on how to get up and running with Tapestry. Here are the use cases we will implement:

1. Display Home Page

2. Display current list of products in the Catalog.

3. Add new Product (go back to 2 after successful add).

 

Let's start with the general project setup. I am using Eclipse 3.2 with JDK 5. My project structure is:

 

Catalog

    |-src

        |-catalog.pages (page classes here)

        |-catalog.service (backend mock service here)

    |-META-INF

    |-WEB-INF

        |-lib

        |-web.xml

        |-*.page files

        |-*.html files

        |-catalog.application

 

I am using Jetty as my web container. Download Jetty (http://www.mortbay.org) and also install Jetty Launcher (http://jettylauncher.sourceforge.net). Jetty Launcher is an eclipse plug-in that allows you to run (and deploy) your application with Jetty. I leave it up to the reader to do this required setup before proceeding.

 

Here is the web.xml so you know what it is.

<web-app xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"

xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd" version="2.4">

<display-name>Catalog</display-name>

<servlet>

<servlet-name>catalog</servlet-name>

<servlet-class>org.apache.tapestry.ApplicationServlet</servlet-class>

<load-on-startup>1</load-on-startup>

</servlet>

<servlet-mapping>

<servlet-name>catalog</servlet-name>

<url-pattern>/app</url-pattern>

</servlet-mapping>

</web-app>

 

Now let's get to Tapestry…finally! First forget about all of the other web frameworks and how they do stuff. Forget JSTL, JSF, Struts and all.

 

Now let's start thinking of our application structure based on our requirements. We need to display a home page with some welcome content. OK, let's then create a Home.html page with the following contents.

 

1. <html>

2. <head>

3. <title>Catalog Mania</title>

4. </head>

5. <body>

6. Welcome to <span jwcid="@Insert" value="ognl:message">(some message here)</span>!

8. <br/>

9. <br/>

10. <a href="#" jwcid="@PageLink" page="Catalog">Enter Catalog Mania</a>

12. </body>

13. </html>

 

What’s this jwcid thing on line 6? Tapestry calls the above HTML a template. It contains both your static and dynamic content. You use special Tapestry decorations, on standard HTML tags, for the dynamic behavior. JWCID stands for Java Web Component ID. Those items in your template that are dynamic are decorated with jwcid notations like in the above example. Here we have made a component out of the span tag by specifying the component type @Insert. Tapestry has many of these component types built in, like @TextField, @TextArea, @For (a for loop), etc.

 

So anything dynamic should be thought of as a component (like the span tag above). Next give it the appropriate component type (@Insert). It is very important you understand how the component paradigm works here. So let me try to summarize it again. The component you attach to the standard HTML tag takes over the responsibility of evaluating what its contents should be at runtime. It is that content which is sent out to the browser.

 

I need to step a little ahead before explaining the ‘ognl’ stuff. Thus far we have a Home.html. If you open a browser and point to it you will see that it displays correctly. And this is the other power of Tapestry. Pages are pure HTML so the web designer and the java developer can both view the pages in their own working domains. The web designer in his designer tool and the java developer via his servlet container.

 

Now we need something on the server side that will process events and requests from this Home.html page. Let's write a Home.java.

package catalog.pages;

 

import org.apache.tapestry.html.BasePage;

 

public class Home extends BasePage {

public String getMessage() {

return "Catalog Mania";

}

}

 

The class extends the Tapestry class BasePage and provides one method, getMessage. Now let's jump back to line 6 in Home.html. The string “value=ognl:message” will at runtime be evaluated to a call to the getMessage method in Home.java. OGNL stands for ‘Object Graph Navigation Language’ and is an open source expression language (like EL in JSTL). Google it for more info.

 

So to summarize, Home.html is attached to Home.java. Home.html uses ognl to express the desire to call getMessage for the above span-based insert component.
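If you are curious what the OGNL expression ‘message’ does under the covers, it is essentially JavaBean-style reflection: the property name is turned into a getter call on the page instance. The sketch below is purely illustrative (not the real OGNL code, which handles far more than simple properties):

```java
import java.lang.reflect.Method;

public class OgnlSketch {

    // Illustration only: map the expression "message" onto a getMessage()
    // call on the page object, the way a simple OGNL property read behaves.
    static Object evaluate(Object page, String expression) throws Exception {
        String getter = "get" + Character.toUpperCase(expression.charAt(0))
                + expression.substring(1);
        Method m = page.getClass().getMethod(getter);
        return m.invoke(page);
    }

    // Stand-in for the Home page class above.
    public static class HomeStub {
        public String getMessage() { return "Catalog Mania"; }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(evaluate(new HomeStub(), "message")); // prints Catalog Mania
    }
}
```

Nested expressions like ‘product.description’ simply chain such property reads.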

 

Lastly, how does Tapestry know Home.html is connected to Home.java? Nah, there is no special default naming convention here. This is done in a Home.page file. I do not like the .page concept one bit. With a large application it will be a pain to maintain so many files, but to be fair it serves a purpose and is not a showstopper.

 

<?xml version="1.0"?>

<!DOCTYPE page-specification PUBLIC "-//Apache Software Foundation//Tapestry Specification 4.0//EN" "http://jakarta.apache.org/tapestry/dtd/Tapestry_4_0.dtd">

<page-specification class="catalog.pages.Home">

</page-specification>

 

If you have followed the instructions carefully (including the project structure in Eclipse) you should be able to deploy and run this application using the Jetty Launcher.

 


The app should be available at http://localhost:9090/catalog/app. Note that Tapestry by default will resolve to Home.html if no page is requested. You could also request the same page via http://localhost:9090/catalog/app?service=page&page=Home. But you are better off by not hardcoding such links.

 

If you look at line 10 of the Home.html we use a built-in Tapestry component @PageLink to link to another Tapestry page, in this case ‘Catalog’. We have not coded that yet. Tapestry will generate the correct links and also do session encoding when necessary.

 

So we are now past some basic Tapestry stuff and have displayed the home page as per our requirements. The next requirement is to display the list of products in the catalog. We already put a link on line 10 of Home.html to invoke the Catalog page.

 

So now here is the Catalog.html and Catalog.java.

Catalog.html

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">

<html>

<head>

<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">

<title>Insert title here</title>

</head>

<body>

It is: <strong><span jwcid="@Insert" value="ognl:new java.util.Date()">June 26 2005</span></strong>

<br>

<a href="#" jwcid="@PageLink" page="Catalog">refresh</a>

<br>

<hr/>

Current Product Catalog<br/>

<hr/>

<table border="1" BGCOLOR="#FFCC00">

<tr>

<th>Name</th>

<th>Desc</th>

<th>Product release date</th>

</tr>

<tr jwcid="@For" source="ognl:products" value="ognl:product" element="tr">

<td><span jwcid="@Insert" value="ognl:product.name">name here</span></td>

<td><span jwcid="@Insert" value="ognl:product.description">desc</span></td>

<td><span jwcid="@Insert" value="ognl:product.releaseDate">1/1/1111</span></td>

</tr>

<tr jwcid="$remove$">

<td>Books</td>

<td>book description</td>

<td>1/1/1111</td>

</tr>

<tr jwcid="$remove$">

<td>Toys</td>

<td>toy description</td>

<td>1/1/2222</td>

</tr>

</table>

<hr/>

<p>

<a href="#" jwcid="@PageLink" page="AddProduct">Add New Product</a>

</p>

</body>

</html>

 

Two things are worth mentioning in Catalog.html. First, the use of ‘jwcid="$remove$"’. As I mentioned previously, Tapestry pages can be viewed in a regular browser without a servlet container. In that case obviously none of the Tapestry components are evaluated, but the page, being standard HTML, will be displayed. In the case of the table above, it will be displayed with two rows (Books and Toys). The ‘$remove$’ jwcid tells Tapestry to ignore those rows at runtime. Thus the page works fine for both the web designer and the java developer. This is not possible with a JSP+JSTL approach.

 

The other thing worth mentioning is

<tr jwcid="@For" source="ognl:products" value="ognl:product" element="tr">

 

We use the loop component, @For, to display all of the products returned by the Catalog page class. This connection to the java class is denoted by source="ognl:products". The value parameter gives a name to a temporary variable that will hold the current product as the loop is evaluated. Thus we are able to do the following:

<span jwcid="@Insert" value="ognl:product.description">desc</span>

 

‘product.description’ will resolve to Catalog.getProduct().getDescription().

 

Here is Catalog.java.

Catalog.java

package catalog.pages;

 

import org.apache.tapestry.html.BasePage;

import catalog.service.Product;

import catalog.service.ProductService;

import catalog.service.ProductServiceImpl;

 

1. public abstract class Catalog extends BasePage {

2. public abstract Product getProduct();

public abstract void setProduct(Product p);

 

// hard coded the backend service for now

// hmmmm ?? need to see if we can inject this from Spring ??

3.  ProductService service = new ProductServiceImpl();

 

4. public Product[] getProducts() {

return service.getProducts();

}

}

 

Line 1: The class is now abstract. Tapestry pools the page classes for reuse. This being the case we have to avoid putting instance variables in the class, since that would require us to do cleanup every time the page class is reused. Rather than doing this ourselves we can avoid instance variables and provide abstract getters/setters for the properties of interest. Tapestry will then take care of cleaning up the instance before handing it out for use in a fresh request invocation.

 

Line 2: We need it so our for loop will work correctly. The value="ognl:product" uses this property on the page class to store the current item each time it goes through the loop. Why it has to be declared here, I have no idea; shows I still have things to learn.

 

Line 3: Our mock backend product service.

 

Line 4: source="ognl:products" connects to getProducts on the class.

 

That’s it. Remember the Catalog.page file. Once you have that in place you can navigate to the Catalog page successfully: clicking the ‘Enter Catalog Mania’ link on the home page takes you to the product listing.


 

Thus far you may have realized that Tapestry is indeed very different from other frameworks. It does take some learning effort, but it’s well worth it.

 

Our final requirement is to add a new product and redisplay the catalog page (the new product should show up).

 

Here is AddProduct.html

AddProduct.html

<html jwcid="@Shell" title="Add Product">

 

<body jwcid="@Body">

<h1>Add New Product</h1>

<form jwcid="form@Form" success="listener:doAddProduct">

<table>

<tr>

<td>

<label jwcid="@FieldLabel" field="component:name">Name</label>:

<input jwcid="name@TextField" value="ognl:product.name" validators="validators:required" displayName="Product Name" size="30"/>

</td>

</tr>

<tr>

<td>

Description:

<textarea jwcid="description@TextArea" value="ognl:product.description" rows="5" cols="30"/>

</td>

</tr>

<tr>

<td>

Release Date:

<input jwcid="releaseDate@DatePicker" value="ognl:product.releaseDate"/>

</td>

</tr>

</table>

<input type="submit" value="Add Product"/>

</form>

</body>

</html>

 

Some of the new components we used here:

  • @Shell – generates the html, head and title tags. Helps to resolve style sheet names at runtime.
  • @Body – generates the HTML body and any JavaScript that goes with your Tapestry page.
  • @FieldLabel – used to display a field label that is attached to a TextField in this example. If validation of the required field fails, the two components (FieldLabel and TextField) know to display the right UI behavior.
  • @TextArea – an HTML text area component.
  • @DatePicker – a JavaScript calendar object.

 

Refer to the online documentation at http://jakarta.apache.org/tapestry/tapestry/ComponentReference/Shell.html for more details on shell and also all of the other built-in components.

 

Here is AddProduct.java

AddProduct.java

package catalog.pages;

 

import java.util.Date;

import org.apache.tapestry.IPage;

import org.apache.tapestry.annotations.InjectPage;

import org.apache.tapestry.event.PageBeginRenderListener;

import org.apache.tapestry.event.PageEvent;

import org.apache.tapestry.html.BasePage;

import catalog.service.Product;

import catalog.service.ProductService;

import catalog.service.ProductServiceImpl;

 

public abstract class AddProduct extends BasePage implements

PageBeginRenderListener {

ProductService service = new ProductServiceImpl();

 

@InjectPage("Catalog")

public abstract Catalog getCatalogPage();

 

public abstract Product getProduct();

 

public abstract void setProduct(Product p);

 

// from PageBeginRenderListener

public void pageBeginRender(PageEvent event) {

Product product = new Product();

product.setReleaseDate(new Date());

setProduct(product);

}

 

public IPage doAddProduct() {

service.addProduct(getProduct());

return getCatalogPage();

}

}

 

The method pageBeginRender is from the interface PageBeginRenderListener. Tapestry invokes this method, as the name suggests, before rendering the page. Here we can apply some default behavior. For example, when AddProduct.html is displayed we want to provide a default value for the release date field. This is another good example of how Tapestry forces us to think of web development from an object-oriented point of view using these page classes.

 

Another very important part of the code is:

@InjectPage("Catalog")

public abstract Catalog getCatalogPage();

 

Remember a page in Tapestry is represented by three files: the .html file with the display template, the .java file with the processing logic, and the .page file being the glue between the two. So whenever I say go to another page I mean this logical page represented by the three files mentioned here.

 

After the doAddProduct method is finished doing its business we would like to return to the Catalog page and display the list of products once again. This is done by injecting the Catalog page into the AddProduct action.

 

Note: All of what we have talked about thus far can be done using JDK 1.4, but wherever we use annotations we would have to enter XML into the .page file instead.

 

Once again do not forget the AddProduct.page. Compile and redeploy and you should be able to get to the add product page, which when done takes you back to the catalog list page. You should see the new product you just added in the list.

 

Final Notes:

Did I mention my dislike for the .page files? Maybe it’s just me, but I think they are redundant. I reduced my Home.page file to

<page-specification>

</page-specification>

 

Note I removed the class name attribute. I made sure WEB-INF\catalog.application had the following:

<application>

<meta key="org.apache.tapestry.page-class-packages" value="catalog.pages"/>

</application>

 

This tells Tapestry where to look for the page classes. Having done this I thought I could get rid of my empty Home.page file above. No luck. As soon as I did that Tapestry blew up with an exception.

 

Using Tapestry involves a steep learning curve and a shift in mindset on how you develop web applications. I personally feel like I have only scratched the surface thus far. In the weeks to come I hope to have a follow-up article on Tapestry using some more of its features and built-in components. And maybe we can even write our own component. Yes that is entirely possible.

 

Flat File Parser
November 7, 2005 12:00 AM

After a few projects where I had to parse through legacy flat files I decided enough was enough and wrote my own parser. This parser would do exactly one thing efficiently: convert lines from a flat file to java objects. I wanted something thin that did exactly that and no other frills. Though now that I have it working, a few frills may be in order. I have created a project at JavaForge where this tool will reside. If you do find it useful please drop a comment in the discussions forum on the javaforge site javaforge.com/project/2066.

The goal is to parse a flat file (either character separated columns or fixed length columns). The parser supports two methods of parsing a file. In the first approach you are responsible for reading the file and providing each line that needs to be transformed to the transformer. The second approach is SAX-like, in that you register a listener and the transformer will call your listener whenever it finds a record and also when it could not resolve a record. First let's run through the first approach; at the end I will show you the SAX-like parsing approach.

Let’s create a java bean class to represent our record with space character separated columns.

import org.aver.fft.annotations.Column;

import org.aver.fft.annotations.Transform;

@Transform (spaceEscapeCharacter="_", recordIdValue="88")

public class DelimitedBean {

.....

@Column(position = 1, required = true)
public int getRecordId() {
   return recordId;
}

@Column(position = 2, required = true)
public String getNameOnCard() {
   return nameOnCard;
}

@Column(position = 3, required = true)
public String getCardNumber() {
   return cardNumber;
}

@Column(position = 4, required = true)
public int getExpMonth() {
   return expMonth;
}

@Column(position = 5, required = true)
public int getExpYear() {
   return expYear;
}

@Column(position = 6, required = true)
public double getAmount() {
   return amount;
}

@Column(position = 7, required = true)
public String getCardSecurityCode() {
   return cardSecurityCode;
}

@Column(position = 8, required = true, format = "MMddyyyy")
public Date getTransactionDate() {
   return transactionDate;
}

... other methods here ...

}

As you can see we use Java 5.0 annotations to mark our record format. By default the parser sets itself up to parse character separated columns and the delimiter is space.

@Transform (spaceEscapeCharacter="_", recordIdValue="88")

By default the parser is set up to parse character-separated columns. The attribute spaceEscapeCharacter indicates the character used to represent spaces within column data; the parser replaces it with a space before loading the data into your java object. The recordIdValue identifies the value of the key column. The transformer keeps an internal mapping of each key value to the java bean class that represents it. By default the first column is the key column; you can change that by passing in the parameter recordIdColumn for character separated columns, or recordIdStartColumn / recordIdEndColumn for fixed length columns. The default column separator is a space; you can change that using columnSeparator.
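Conceptually, the delimited parsing boils down to a split on the separator plus an unescape of the space-escape character. The snippet below is a simplified, self-contained illustration of that idea (not the library's actual code; the real parser also does type conversion and the record-id mapping):

```java
public class DelimitedSplitSketch {

    // Split a line on the separator, then turn the escape character
    // ('_' in the example record) back into a space in each column value.
    static String[] columns(String line, String separator, char spaceEscape) {
        String[] cols = line.split(separator);
        for (int i = 0; i < cols.length; i++) {
            cols[i] = cols[i].replace(spaceEscape, ' ');
        }
        return cols;
    }

    public static void main(String[] args) {
        String line = "88 Mathew_Thomas 4111111111111111 02 2008 12.89 222 10212005";
        String[] cols = columns(line, " ", '_');
        System.out.println(cols[1]); // prints Mathew Thomas
    }
}
```

The annotations then tell the transformer which column position feeds which bean property.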

That’s enough on defining the file format. Now here is how to actually read it.

Transformer spec = 
   TransformerFactory.getTransformer(new Class[] { DelimitedBean.class });

String line = 
   "88 Mathew_Thomas 4111111111111111 02 2008 12.89 222 10212005";

DelimitedBean bean = (DelimitedBean) spec.loadRecord(line);

You get a transformer instance as shown above. Pass it an array of all the classes that represent your various records, annotated as defined above. Now you have a fully loaded bean from which to read your data. That’s all.

Now let's see how you define the same for a fixed column record format. The parsing code above stays the same; the difference is in how you annotate your result bean class.

import org.aver.fft.annotations.Column;

import org.aver.fft.annotations.Transform;

@Transform(spaceEscapeCharacter = "_", columnSeparatorType = Transformer.ColumnSeparator.FIXLENGTH,
   recordIdStartColumn = 1, recordIdEndColumn = 2, recordIdValue = "88")
public class FixedColBean {

@Column(position = 1, start = 1, end = 2, required = true)
public int getRecordId() {
   return recordId;
}

@Column(position = 2, start = 3, end = 15, required = true)
public String getNameOnCard() {
   return nameOnCard;
}

@Column(position = 3, start = 16, end = 31, required = true)
public String getCardNumber() {
   return cardNumber;
}

@Column(position = 4, start = 32, end = 33, required = true)
public int getExpMonth() {
   return expMonth;
}

@Column(position = 5, start = 34, end = 37, required = true)
public int getExpYear() {
   return expYear;
}

@Column(position = 6, start = 38, end = 43, required = true)
public double getAmount() {
   return amount;
}

@Column(position = 7, start = 44, end = 46, required = true)
public String getCardSecurityCode() {
   return cardSecurityCode;
}

@Column(position = 8, start = 47, end = 54, required = true, format = "MMddyyyy")
public Date getTransactionDate() {
   return transactionDate;
}

… other methods here …

}

The parsing logic stays the same. Just give it the correct line of data.
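The fixed-length extraction above is essentially substring slicing. Here is a rough self-contained sketch, assuming the start/end positions are 1-based and inclusive (my reading of the annotations above, not confirmed against the library source):

```java
public class FixedColSketch {

    // Extract a column using 1-based, inclusive start/end positions,
    // matching the start/end attributes in the annotations above.
    static String column(String line, int start, int end) {
        return line.substring(start - 1, end).trim();
    }

    public static void main(String[] args) {
        // Positions: 1-2 record id, 3-15 name, 16-31 card number.
        String line = "88Mathew Thomas4111111111111111";
        System.out.println(column(line, 3, 15)); // prints Mathew Thomas
    }
}
```

The trim() handles the padding that fixed-width formats typically use to fill out short values.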

Now I will show you the SAX-like parsing approach.

package org.aver.fft;

import java.io.File;
import junit.framework.TestCase;

public class DelimitedFullFileReaderTestCase extends TestCase {
   public void testFullFileReader() {
      Transformer spec = TransformerFactory.getTransformer(new Class[] { DelimitedBean.class });
      spec.parseFlatFile(new File("c:/multi-record-delcol-file.txt"), new Listener());
   }

   class Listener implements RecordListener {
      public void foundRecord(Object o) {
         DelimitedBean bean = (DelimitedBean) o;
         System.out.println(bean.getNameOnCard());
      }

      public void unresolvableRecord(String rec) {
         // ignore records that could not be resolved
      }
   }
}

I have this project located at: www.javaforge.com/proj/summary.do?proj_id=271

Unit Testing Woes
July 14, 2005 1:00 AM

Unit Testing Woes

While unit testing is expected to be an integral part of every development effort, it is often not given its due importance during planning. When you derive the LOE (level of effort) for development tasks, do you make sure to include unit testing in that calculation?

Let’s start with the question “what is unit testing”?

It is the effort of selecting a unit of code and writing an independent set of code that exercises one or more features of that unit.

You can define the unit as either a class or a set of classes that work together to deliver a certain feature. I would not waste time writing unit tests for every java class you create. Select a logical unit of work that suits your needs (and project schedule) and write tests against that. Make sure the test exercises your code and not integration with other developers’ units. It is a good idea to write integration test cases, but start with your own unit first. If you have dependencies on other components you can choose the strategy of mocking those components. This way you are free to test your code and do not get bogged down by other components. Let’s say you are testing a backend component that accepts credit card information from users, does some validations on the user data, stores it in the database and then submits the transaction to a credit card authorization provider. Obviously you are not interested in testing the authorization provider’s code, and you may not even have access to that provider during development. You can write a few classes and mock out the provider, or write a simple emulator to emulate the actions of the provider. Later on, as part of your developer integration testing, you can swap in the real provider’s test environment for more thorough testing.
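The mocking strategy described above hinges on coding against an interface. Here is a minimal sketch of the credit card scenario; all the names (AuthorizationProvider, PaymentService, and the canned rule in the mock) are invented for illustration:

```java
// The component under test depends on this interface, not on a vendor class.
interface AuthorizationProvider {
    boolean authorize(String cardNumber, double amount);
}

// Mock used in unit tests; the real provider is swapped in for integration testing.
class MockAuthorizationProvider implements AuthorizationProvider {
    public boolean authorize(String cardNumber, double amount) {
        return amount < 10000; // canned rule so tests are predictable
    }
}

public class PaymentService {
    private final AuthorizationProvider provider;

    public PaymentService(AuthorizationProvider provider) {
        this.provider = provider;
    }

    public boolean submit(String cardNumber, double amount) {
        // validations and database persistence omitted for brevity
        return provider.authorize(cardNumber, amount);
    }

    public static void main(String[] args) {
        PaymentService service = new PaymentService(new MockAuthorizationProvider());
        System.out.println(service.submit("4111111111111111", 12.89)); // prints true
    }
}
```

Because PaymentService only knows the interface, the test never touches the real authorization network.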

Often the importance of unit testing is not understood. During a recent project status meeting, for a release that I was not actively involved in, the build manager asked “should we run the automated unit tests that used to run every night during the previous release?” Everyone stared at each other in silence and then there was some quiet shaking of heads to say “no”. I was shocked to see this reaction. The team for the current release was not writing a single new test case and on top of that had the boldness to choose not to run any of the tests that already existed. The unit test execution was completely automated. You add your unit test case class name to an XML file and that’s it; it would be included in the nightly build and test cycle. Come next morning and you have a neat HTML page with the complete test results. Finally it was decided that once development finished everyone would spend a week writing test cases. I disagree with this approach too, since it comes a little too late. It was disappointing to say the least.

When experienced developers make this choice, how do we convince everyone else about the importance of unit testing? Let’s start with some questions you can ask yourself. How do you know if the “thing” you coded works? How do you test this “thing” everyday to make sure it still works? How does someone else make sure that the “thing” continues to work long after you, the original developer, are off the project? The answer is obvious. Write unit tests.

Some of the challenges often encountered in projects are:

  • Schedule is tight and has not accounted for unit testing. So developers have no time to write unit tests.
  • Management (often inexperienced in good software development practices) does not understand the importance of unit testing.
  • Developers do not take the time to write tests.
  • Test coverage cannot be achieved on all portions of the system and that sometimes causes misunderstanding. Questions come up like “hey, you said you had unit tests for A, B and C, why is there none for the X and Y scenarios?” Please understand that for a fast-paced project getting some unit tests out there is better than none. Writing unit tests is a coding task that requires time.

Remember that in spite of unit tests, defects will show up during testing. On a recent project the QA lead went on to make this an issue. Often these kinds of statements come from immature managers/leads who have very little software development process background, or they are just plain ignorant. Getting total coverage is impossible given the schedules of many fast-paced projects.

So what are some of the options for us Java developers to write unit tests? There are many options but I will cover the following in brief:

  1. JUnit.
  2. TestNG.
  3. Custom framework using JDK 5.0 annotations.

JUnit

JUnit has been around for a while now. It is a simple framework that allows one to write Java test classes. Your classes follow a certain convention in naming the test methods.

import junit.framework.*;

public class MyTestCase extends TestCase {

protected void setUp() {

.. set up test data ...

}

protected void tearDown() {

}

public void testAddVisaTransaction() {

}

public void testAddMasterCardTransaction() {

}

}

  • Your class extends TestCase, which is a JUnit framework class.
  • You can optionally override the framework method setUp and do just that.
  • Name your test methods starting with the word ‘test’.
  • You can optionally override the framework method tearDown to perform some clean up.
  • You can choose to group your tests using a TestSuite by overriding the method “public static Test suite()”. This allows you to run multiple tests and execute them in a certain order.

You can easily integrate the execution of the tests in an Ant script. You can use the optional JUnit and JUnitReport ant tasks to execute the unit tests and produce a nice HTML report of the test results. Setting this up should not take you more than a day. Plug this into your nightly build cycle and you have a simple yet immensely powerful automated unit testing execution strategy. Time spent on this is time well spent.

TestNG

TestNG is a nice little framework that takes a slightly different approach in the way you write your test classes. Suffice it to say it’s less intrusive (that is if you can call JUnit intrusive). With TestNG you do not extend any framework classes nor do you have to name your tests methods in any particular format. You use annotations (either JDK 5.0 annotations or javadoc style annotations if you are using JDK 1.4.x). Let’s see an example with JDK 5.0 annotations.

import org.testng.annotations.*;

public class MyTestCase {

   @Configuration(beforeTestClass = true)

   public void setUp()  {

      .. set up test data ...

   }

   @Test(groups = { "mygroup" })

   public void testAddVisaTransaction(){

   }

   @Test(groups = { "mygroup" })

   public void testAddMasterCardTransaction() {

   }

}

Personally I like this approach better. With the introduction of annotations in JDK 5.0 this style is definitely going to become the preferred approach. Note you can name the above methods with any name you choose. I simply ported the previous JUnit test to TestNG.

Custom framework using JDK 5.0 Annotations

I would suggest you stick with TestNG, but if you want to create your own framework it’s not too hard now. Annotations are probably the most important new feature in JDK 5. While I have seen some examples of annotation code which almost drove me up the wall, in most cases it is a much calmer experience.

Let’s create a set of custom annotations to create a simple test framework. We will create the following annotations

1. @TestCase – will be used to mark a class as a test case. Will allow the tester to give a description to the test case.

2. @Setup – Used to mark one or more methods as set up methods.

3. @Test – Used to designate a method as a test method to execute.

First we will create the @TestCase annotation. This will be used to mark the class as a test case and provide some useful description of the test class.

import java.lang.annotation.*;

@Retention(RetentionPolicy.RUNTIME)

@Target(ElementType.TYPE)

public @interface TestCase {

   String description();

}

  • Import the annotation classes from java.lang.annotation.
  • @interface – this keyword marks the, otherwise regular, java class as an annotation definition.
  • The annotation we are creating is itself annotated with meta-annotations.
  • @Retention(RetentionPolicy.RUNTIME) – the annotation can be read at run-time.
  • @Target(ElementType.TYPE) – the annotation only applies to types (classes).

Next we will create the @Setup annotation. This one is used to mark the methods that are set up in nature. These will run before any of the tests are executed.

import java.lang.annotation.*;

@Retention(RetentionPolicy.RUNTIME)

@Target(ElementType.METHOD)

public @interface Setup {

}

Finally we will create our @Test annotation that marks individual methods as test methods which are to be executed.

import java.lang.annotation.*;

@Retention(RetentionPolicy.RUNTIME)

@Target(ElementType.METHOD)

public @interface Test {

}

Now let’s use these annotations in a sample test case class.

import com.unittest.*;

@TestCase (description="My test case description.")

public class MyTestCase {

@Setup

public void setUpData() {

   System.out.println("Setup invoked.");

}

@Test

public void doTest1() {

   System.out.println("doTest1 invoked.");

}

@Test

public void runTest2() {

   System.out.println("runTest2 invoked.");

}

}

I am sure you will agree that the entire exercise so far is not complicated in any way. I am using the latest Eclipse 3.1 to write this code. Eclipse 3.1 supports building custom annotations and it will invoke the annotation compiler for you. Now to create the test harness that will execute the tests. The class java.lang.Class has been updated in JDK 5.0 to support annotations. You will see that in the next sample code.

import java.lang.reflect.*;

import com.unittest.*;

public class TestRunner {

   public static void main(String[] args) throws Exception  {

Class testClass = Class.forName(args[0]);

// Check if the class is annotated with @TestCase.

if (!testClass.isAnnotationPresent(TestCase.class)) {

System.out.println("Test classes must be annotated with @TestCase.");

System.exit(1);

}

// Print the test case description.

TestCase testCase = (TestCase) testClass.getAnnotation(TestCase.class);

System.out.println("TestCase description ==>> " + testCase.description());

// Get an instance of the target test case to be executed.

Object target = testClass.newInstance();

// Execute only the 1st @Setup annotated method (if one exists).

for (Method method : testClass.getDeclaredMethods()) {

if (method.isAnnotationPresent(Setup.class)) {

method.invoke(target);

break;

}

}

// Execute the @Test annotated methods.

for (Method method : testClass.getDeclaredMethods()) {

if (method.isAnnotationPresent(Test.class)) {

method.invoke(target);

}

}

   }

}

That’s it, we are done. Pass the test case class name to the TestRunner and you will see your tests execute. To keep this simple I have not included any exception handling in the code above.

Conclusion

There are many options for building your unit testing strategy. Pick any one that suits your needs, but just use something.

Achieving Better Software Code Quality
June 4, 2005 1:00 AM

Code Quality. Have you achieved it? If yes - congrats and bye. If not, read on.

Achieving code quality is the holy grail of software development. We feel it deep inside us. We know it's there somewhere, but alas we are not able to get to it. So what is software quality and why is it so difficult to achieve?

On one project I worked on, management told us to build “a zero defect system”. I laughed, because knowing the schedule and chaos on the project, this was as good as finding a real Superman and then asking him for a ride to the moon.

So what is code quality? Is it code that produces a very low number of defects? Code that is well documented? Code that has a good design behind it? Is it a measure of how well you have unit tested it? Or is it pretty, well-formatted code?

Quality code can be achieved by following a few basic principles:

  • Think about what you are building, why, and for whom. And think through this many times.
  • Think about the design you want to put in place after you have answers to the earlier principle.
  • Communicate often and break unnecessary communication walls. Let everyone feed on the information. Let no one be the guardian of information. This is especially true when gathering and communicating requirements.
  • Your code is not complete without unit tests.
  • Your unit tests are not complete if you only exercised the happy paths. Thus your code is not complete either.
  • Have you followed well-established design patterns?
  • Does your code smell? http://c2.com/cgi/wiki?CodeSmell
  • Classes, variables and methods should have self-documenting names.
  • Conduct early design reviews.
  • Review your code.
  • Refactoring is your friend.
  • Plan for extensibility.
  • Ensure you do not incur technical debt - http://bit.ly/N6IHm
  • De-link yourself from the code.
The last principle might look strange here. But think about it. Software development is all about people and how well our egos interact with others. Once individuals write code, they unconsciously feel that the code is a reflection of their intellect. Any criticism of the code then becomes a very personal matter. That is why I encourage early design/code reviews. Then it's everyone's ideas and not just one person's.
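To make the unit-testing principle above concrete: exercising only the happy path leaves the error handling untested. Here is a minimal sketch with a hypothetical divide() helper, using plain Java rather than any particular test framework:

```java
// Hypothetical example: a divide() helper whose test covers
// both the happy path and the error path.
public class DivisionTest {

    static int divide(int a, int b) {
        if (b == 0) {
            throw new IllegalArgumentException("Cannot divide by zero.");
        }
        return a / b;
    }

    public static void main(String[] args) {
        // Happy path: normal division works.
        if (divide(10, 2) != 5) {
            throw new AssertionError("Happy path failed.");
        }
        // Error path: the part many unit tests never exercise.
        boolean threw = false;
        try {
            divide(1, 0);
        } catch (IllegalArgumentException expected) {
            threw = true;
        }
        if (!threw) {
            throw new AssertionError("Expected IllegalArgumentException.");
        }
        System.out.println("Both paths exercised.");
    }
}
```

A real test suite would put these in JUnit test methods, but the principle is the same: every branch your code can take deserves at least one test.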

It might seem like a lot of work to implement these principles, but I am reminded of the following excerpt from - http://c2.com/cgi/wiki?CodeSmell

Highly experienced and knowledgeable developers have a "feel" for good design. Having reached a state of "UnconsciousCompetence," where they routinely practice good design without thinking about it too much, they find that they can look at a design or the code and immediately get a "feel" for its quality, without getting bogged down in extensive "logically detailed arguments".

My thought is that each and every developer needs to strive to attain UnconsciousCompetence. The only way you can achieve that is by following the principles above and making them part of your being.

Can we really get to a zero defect system? I have yet to be part of one and I think it's impossible. So rather than trying to build a zero defect system, let's try to build a system with a reasonably low number of defects, well-documented code, consistent coding guidelines, and unit tests that achieve maximum code coverage. So here is how I define quality code.

Quality code is code that has a well thought out reason for its very existence, is backed by a solid design, is testable, has repeatable unit tests, is self-documenting and has extensibility built into its very core.

To achieve good code quality everyone has to play their part.

  • Management Responsibilities:
    • Build and maintain an environment in which the team has the highest probability of success.
    • If you do not understand the technology, do not ask the developer to get it done in 1 hour.
    • Ensure that a simple, repeatable and well defined development process is followed. I prefer Agile methods and practices.
    • Encourage collaboration between all players.
    • Break down communication barriers.
    • Realize that LOEs (levels of effort) are only guesstimates. They have no reason to even exist.
    • Realize that creating repeatable unit tests is a coding task and often takes a good amount of time.
    • If you have a QA team that does formal testing, make sure they have defined processes in place. Have them hook into the development process early on. The more time they spend on the domain the better prepared they are to create test cases that have maximum coverage.
    • Invest in methods to gather project statistics. Define them and gather them diligently and continuously. Never use them against an individual though.

  • Developer Responsibilities:
    • The next time you check in some code without design or unit testing realize that you have just checked in code whose quality is suspect.

    • Ask for a requirements document (use case, user story) that is not in an email or in your voice mail.
    • If requirements change during construction and someone comes to you saying “hey remember last week we talked about this…now they want it differently”. Stop any urge to begin changing the code. Ask for the change to be formally included into the schedule and prioritized.
    • Write repeatable unit tests. Use JUnit or any other framework you want. But use something.
    • Have you heard of Continuous Integration?
    • When unit tests fail you are supposed to fix them.
    • Think TDD (Test Driven Development).
    • Have your peers review your design and code. This not only helps improve quality by finding issues but also acts as a knowledge-sharing mechanism. So if your developer is kidnapped by Martians, you can rely on the others who have some idea of that code.
    • If you think the LOE was not enough tell your project manager as soon as you realize that.
    • Look at defects your testing team finds against your code as a positive thing. That is one less defect in production.
    • Never leave performance to the end.

  • Testing Responsibilities:
    • Understand your requirements very well. You have the task of matching the requirements to what was built. So your job is very important. It’s the last sanity check before the client gets hold of the software.
    • Write test cases.
    • When you have requirements questions, it's NOT OK to ask the developer. Try asking the analyst or whoever is responsible for gathering the requirements.
    • Don’t let a developer walk you through testing scenarios. It’s your job to come up with those in your test cases.

Your value is not just in getting the work done but it’s also in how well you approach your work. And quality will follow.

Intelligent Software Agents in Knowledge Management
June 1, 2005 1:00 AM

The goal of this article is to introduce the reader to the concepts and theory behind the knowledge management process and how intelligent software agents can help to manage the knowledge management process.

Before we get into the crux of this article, it is important to understand why we even discuss this topic. Various types of decision support systems (DSS) are widely used in enterprises today. Organizations might need an EIS (enterprise information system) that not only caters to the needs of management but also serves the needs of the rest of the enterprise. Needs are obviously defined by what role each person plays. For example, an EIS can be used to check demand and match it closely to supply, or to perform forecasting based on trends or patterns in the data. The validity of the results from this process depends on the knowledge that was used to arrive at them.

The key to the success of a DSS system is to have access to a reliable, valid and growing knowledge base. Human actors will often enter knowledge into the knowledge base directly. But that may not be feasible or many times not possible. Here is where intelligent software agents can help.

Intelligent Software agents, in the context of knowledge management, are automated software modules that act on behalf of the knowledge management system to automatically collect knowledge, validate it, organize it and then add it to the knowledge base.

Knowledge

Typical production databases are transactional in nature. They can be seen as the database of operations, where all business transactions take place. Orders are placed, inventory is tracked, and customers are managed among other activities. Here the onus is on managing data and the challenge is to have efficient and reliable access to this data. Data is often organized into tables to form meaningful information.

But there is a parallel requirement in many large enterprises to have a different view of the data. A view that is used by upper management (and others) to track sales, to forecast trends (like demand and supply), to trouble shoot specific performance problems, etc. Here the onus is not on pure data. Instead what is needed is to consolidate the information from the many databases spread across the organization and bring them together to provide what we term knowledge. Knowledge is giving meaning or more substance to data, so that decisions can be made using this knowledge.

Knowledge Base

Knowledge is typically collected and organized into a knowledge base (similar to how data is stored in databases). These knowledge bases are often implemented as data warehouses and data marts, and they are separate from the operational databases. In fact, that separation should be a requirement for your knowledge base. Some of the activities that are run on a knowledge base can be very intense and could slow down your already fully loaded operational database.

Typically the knowledge base is another DBMS that caters purely to the knowledge management subsystem. This could be an RDBMS (like Oracle, DB2) or you can even use XML enabled databases.

Knowledge Management Process

Knowledge management is the process of collecting, rearranging and validating data to produce knowledge. Knowledge can be gathered from various sources such as:

  • Human actors using the system.
  • Automatic feeds from internal or external sources.
  • Periodic manual feeds from internal or external sources.
  • Random feeds from internal or external sources.
  • Knowledge engineers working in tandem with experts.
  • Experts.

Knowledge can be gathered from any of the above sources (maybe from all too). Here is a simple checklist to keep in mind when gathering knowledge. Assume the system is getting a new feed of data, which is to be entered into the knowledge base. In a good knowledge management process…

  • It is important to identify knowledge sources and also verify that the data is coming from the same reliable sources (capture phase).
  • The data should adhere to pre-defined formats (refine phase).
  • The data should follow conventions for any business rules that have been previously identified (refine phase).
  • The data needs to be massaged into a form that can be understood by the knowledge collection subsystem (refine phase).
  • The massaged data needs to be validated to check for correctness and verified to check that it follows all the pre-defined business rules (validate phase).
  • Finally, the newly constructed knowledge should be added to the knowledge base (store phase).
  • The process should be able to react appropriately in case of errors in the feed.
  • When knowledge is requested, it will be retrieved and, if needed, consolidated with other knowledge and returned (disseminate phase).
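The checklist above can be sketched as a simple pipeline. This is a purely hypothetical illustration - the class name, the source name and the refine/validate steps are invented stand-ins for real business rules:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch of the capture -> refine -> validate -> store pipeline.
public class KnowledgeAgent {

    private final List<String> knowledgeBase = new ArrayList<String>();

    // Capture: accept a raw feed, verifying it comes from a known source.
    public void process(String source, String rawFeed) {
        if (!"trusted-feed".equals(source)) {
            throw new IllegalArgumentException("Unknown source: " + source);
        }
        // Refine: massage the data into a normalized form.
        String refined = rawFeed.trim().toLowerCase();
        // Validate: apply a (stand-in) business rule.
        if (refined.length() == 0) {
            throw new IllegalArgumentException("Empty feed rejected.");
        }
        // Store: add the newly constructed knowledge to the knowledge base.
        knowledgeBase.add(refined);
    }

    // Disseminate: return the collected knowledge on request.
    public List<String> retrieve() {
        return Collections.unmodifiableList(knowledgeBase);
    }
}
```

In a real system each phase would be far richer - schema checks in refine, rule engines in validate, a warehouse in store - but the shape of the flow is the same.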

An important function of a knowledge management process is to continuously grow its knowledge base (following the process we outlined above). An inference engine will use this knowledge to provide value to a client. An inference engine is only as good as its backend knowledge base.

Very often it is not feasible to leave it to human agents to enter data into the knowledge base. Sometimes the data may be so large that this is not possible. Or the information may arrive periodically at predefined times or maybe it follows no time schedules. Here is where we can use intelligent software agents.

Intelligent Software Agents

Intelligent Software agents are independent autonomous software programs that gather knowledge by following the process we outlined earlier. In doing so they require no help from human agents. The process is completely automated. They are termed as intelligent because they possess all process information on how to intelligently read incoming information and convert it to knowledge to be stored into the knowledge base. These agents can be written in various programming languages such as C, C++, Java, etc. They can also be implemented using newer technologies such as Web Services.

Agents can be one of two types: static agents or mobile agents. Let's discuss each in some detail, and also how they can be used in the knowledge management process. There are other classifications of software agents, but for the purposes of this article we will concentrate on the static and mobile classifications only.

Static Agents

Static agents are called so based on the fact that they do not move or relocate themselves from the computer that started them. If a particular computer starts a static agent then the agent will continue to run on that very computer throughout its lifecycle.

The life cycle of a static agent can be better understood using the diagram below.

Initially there are no agents in the knowledge management system. The agents come to life either when the first knowledge gathering task is initiated or a pre-configured number of agents can be set up to be in a pool of free agents. When a new knowledge task arrives either an available agent from the pool is allocated or a new one is created.

When the agent is running, it is at that point that the knowledge process we outlined earlier is applied. When the agent finishes execution it is added back to the pool of available agents. Keeping a pool such as this can be useful in improving the reliability and scalability of the knowledge management system.
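The pool-based life cycle described above might be sketched as follows. The class names are hypothetical, and a real implementation would also need thread safety and agent health checks:

```java
import java.util.LinkedList;

// Hypothetical sketch of a static-agent pool: a free agent is allocated
// when a knowledge task arrives, or a new one is created if none is available.
public class AgentPool {

    public static class Agent {
        public String run(String task) {
            return "processed: " + task;
        }
    }

    private final LinkedList<Agent> freeAgents = new LinkedList<Agent>();

    // Allocate an agent from the pool, creating a new one if the pool is empty.
    public Agent acquire() {
        if (freeAgents.isEmpty()) {
            return new Agent();
        }
        return freeAgents.removeFirst();
    }

    // When an agent finishes execution, it rejoins the pool of free agents.
    public void release(Agent agent) {
        freeAgents.addFirst(agent);
    }

    public int freeCount() {
        return freeAgents.size();
    }
}
```

This is the same idea behind database connection pools: reuse avoids the cost of creating an agent per task, which helps both reliability and scalability.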

The next question we need to ask is: how are these agents invoked by clients? There are many ways this can be done, but let's discuss one very innovative method. Today Web Services are the “in-thing” in the tech world. Beyond the hype, this technology is extremely viable for implementing static intelligent agents. Web Services allow us to expose interfaces on existing or new business objects to our business partners.

Our static agent could be designed as a J2EE (or .NET) object running on a remote server. This object, though private to the server, exposes some of its interfaces using the web services suite of protocols (SOAP, WSDL, UDDI). Clients who need to feed in data can do so through the appropriate web service interfaces, providing the data as an XML document that conforms to a predefined XML Schema. Due to the use of XML Schema there is naturally a strict adherence to data formats, and some amount of data validation is applied right here at this step. This can save an enormous amount of computing resources on the server and can better facilitate the knowledge validation process.
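Validating an incoming feed against an XML Schema can be done with the JDK's javax.xml.validation API (available since Java 5). The schema and documents below are hypothetical; a real agent would load its schema from a location agreed with its clients:

```java
import java.io.StringReader;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

// Hypothetical sketch: reject a feed at the web service boundary
// unless it conforms to the agreed XML Schema.
public class FeedValidator {

    // A tiny, made-up schema: the feed is a single <feed> element of text.
    private static final String SCHEMA =
        "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>"
      + "  <xs:element name='feed' type='xs:string'/>"
      + "</xs:schema>";

    // Returns true only if the XML document conforms to the schema.
    public static boolean isValid(String xml) {
        try {
            SchemaFactory factory =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema =
                factory.newSchema(new StreamSource(new StringReader(SCHEMA)));
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource(new StringReader(xml)));
            return true;
        } catch (Exception e) {
            // Malformed or non-conforming documents fail validation.
            return false;
        }
    }
}
```

Rejecting bad feeds at this boundary means the rest of the knowledge pipeline only ever sees well-formed, schema-conforming input.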

Static agents can also be of a reactive kind, wherein they react to new data that is added to a database or that an application server receives. In such cases the application server or database can spawn a reactive static agent and delegate to it the task of knowledge processing. Or these agents can run in the background, always waiting for new data to come in. Once they detect new data they read it and run the knowledge gathering process to move it into a separate knowledge base. Very often, though, knowledge creation is done at predefined times, maybe once a day. During that time slot the agent will read the operational database and process the new or updated data. For large amounts of highly volatile data this can be a very beneficial approach.

Static agents are especially well suited for data-mining tasks. Typically, data mining involves wading through large amounts of data in a data warehouse or data mart to find patterns in existing knowledge. Agents can perform these background tasks at predefined time intervals, triggered by the arrival of certain data, or simply when a request for knowledge access comes in.

Mobile Agents

This is a very innovative field of research. Ironically, what makes it so innovative is also the reason this technology is not in common use.

Mobile agents are software modules that do not necessarily stay on the server that initiated them. Simply put, these agents travel. Say we start an agent on one computer (the parent). To perform its work the agent needs to communicate with a remote server. The agent might start performing some of its duties on the parent computer and then decide to move from the parent towards the remote server. In doing so it might decide to travel the network and move ever so close to the remote server. Finally, once it finishes its task, it will notify the remote server, or it may even destroy itself. The parent can at all times send messages to the agent, such as control messages.

Some agents might interact with other static or mobile agents to perform their tasks. Some may even spawn additional sub-agents to delegate some tasks. At all times the agent maintains a reference to the parent server.

One might ask how this could be of any use in a knowledge management system. The answer is simple: it depends on what type of information your knowledge base is tracking. Let's say we have an enterprise with a large, globally distributed computing facility. The network is so complex that it has become difficult to track what is happening on it, and difficult to collect performance and security related information. And we need to periodically have this knowledge added to a knowledge base, so that we can later analyze and maybe even predict network performance.

We can create a mobile agent that will roam our network, moving from one node to another and collecting network performance statistics as it goes. Periodically the mobile agent can send the information back to the parent, which can then add it to the knowledge base. The agent can communicate with the network elements using SNMP. Based on this simple yet realistic example you can see the power behind mobile agents.

Due to their mobility, these agents may be limited to gathering data and perhaps performing some initial validation on it. Once they gather data they can call a static agent on the parent to perform the remaining tasks. It is important that the mobile agent keep doing its main job, which is to keep moving and gathering new data.

Limitations of Mobile Agents

Mobile agents face many challenges, among them security concerns: What if someone tampers with the agent's runtime code? How does the agent find a suitable platform from which to execute as it moves? How does the parent know that the data it is receiving is from the agent and not from some other malicious agent? How does the parent know if a child agent is still alive? What if the agent loses the ability to communicate with the parent?

Conclusion

Automated knowledge management using intelligent software agents is very much a reality. Advances in newer technologies such as J2EE, Web Services and .NET allow us to create more reliable, scalable and secure intelligent agents. Static agents are more common than mobile agents, but as discussed earlier, mobile agents are definitely useful in certain types of applications.

Resources

For information on XML Schema, refer to http://www.w3.org/XML/Schema

Tryllian ADK for mobile agent development - http://develop.tryllian.com/

IBM's Aglets mobile agent development

Voyager from Recursion Software - http://www.recursionsw.com/

Software Agents on Distributed Knowledge Management Systems (DKMS Brief No. Three, July 30, 1998)