Monday, December 19, 2011

Java: Pass by value or Pass by reference

Java: Pass by value or Pass by reference?
Short: Pass by value

As I like to think of it, this is largely a memory optimization. Primitive data types have a fixed length whereas objects do not. Both are ultimately stored as contiguous sequences of bits, but primitives and references live on the stack, which grows in one direction, while objects live on the heap, where the runtime has to search for contiguous free blocks.

It is generally said that primitives are passed by value and objects are "passed by reference" (their memory address is passed). Strictly speaking, however, even those object references are passed by value: the method receives a copy of the reference.

In C#, by contrast, you can truly pass by reference, so the caller's variable itself is modified.
E.g.:
void PassMe(ref OriginalObj obj)
{
    obj = new OriginalObj(); // the caller's variable now points to this new object, even after the method returns
}
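
For comparison, here is a minimal Java sketch (the class and field names are purely illustrative) showing what "references are passed by value" means in practice: reassigning the parameter has no effect on the caller, while mutating the object it points to is visible to the caller.

public class PassByValueDemo {

    static class Box { int value; }

    static void reassign(Box b) {
        b = new Box();     // only the local copy of the reference changes
        b.value = 99;
    }

    static void mutate(Box b) {
        b.value = 42;      // the caller sees this: both references point to the same object
    }

    public static void main(String[] args) {
        Box box = new Box();
        reassign(box);
        System.out.println(box.value); // prints 0 - the caller's reference is untouched
        mutate(box);
        System.out.println(box.value); // prints 42 - the shared object was modified
    }
}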


 

Interface v Abstract Class (in my own words)

Interface-based programming works very closely with DI (P2I, or program to interface). The container pushes the concrete implementation in at runtime, so the code stays loosely coupled. It also promotes Test Driven Development. Why? Because you can bypass the configuration files that would otherwise resolve the real resources and inject test doubles instead.

As for the difference between abstract classes and interfaces: both are essentially contracts that implementing classes must follow, but an interface leaves the implementation of the contract's methods entirely up to the developer. It also means a class can implement as many interfaces as necessary. I like to think of abstract classes as "more strict": a class can extend only one abstract class, neither can be instantiated, and abstract classes can carry concrete method definitions alongside the abstract ones. A small sketch of the difference follows.
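
Here is a minimal, hypothetical Java sketch of that difference (the type names are made up for illustration):

// An interface is a pure contract: implementors decide everything about the "how".
interface PaymentProcessor {
    void process(double amount);
}

// An abstract class is stricter: it can hold state and concrete methods next to abstract ones.
abstract class AbstractPaymentProcessor implements PaymentProcessor {

    private int processedCount;

    // Concrete, shared behaviour lives here...
    public final int getProcessedCount() {
        return processedCount;
    }

    public void process(double amount) {
        doProcess(amount);   // ...while the variable part is deferred to subclasses.
        processedCount++;
    }

    protected abstract void doProcess(double amount);
}

// A class may implement many interfaces but extend only one abstract class.
class CardProcessor extends AbstractPaymentProcessor implements java.io.Serializable {
    protected void doProcess(double amount) {
        System.out.println("Charging card: " + amount);
    }
}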

Sunday, July 17, 2011

Thinking Agile

Today I was talking to a new developer about how Agile works and how it has almost become the de-facto methodology, replacing waterfall. He seemed to buy into what I was saying. I told him how we need to embrace change rather than fight it, and how you add value to your product if you identify what works and what does not, ultimately creating a product that not only meets what the customer envisioned in the first place but exceeds it. Basically I was explaining to him how you start working on version 1.0 and, at the end of it, you might actually have a version 2.0.

I think Agile works for smaller projects too. If we have the mindset that change is acceptable and that re-work is really refactoring (making things better), and we can make clients understand that as well (because more hours also mean a better product, and a little more money), we can do Agile for small projects as well.

As far as tools for managing agile projects go, there is Rally. The Community edition is free to use for up to 10 users, but it can manage only one project and is hosted on Rally's own servers. Another tool is JIRA with the GreenHopper plugin, one of the Atlassian tools. Other good tools from the Atlassian family are Crucible for peer review and Bamboo for continuous integration; these make sense mainly for enterprise-level applications. I am sure most of us are familiar with these and other tools as well.

Thursday, June 9, 2011

Unit Testing Hibernate Data Access Objects using JUnit 4 – Part II

In part I we set up the infrastructure, or framework, for unit testing. In this part we will write our domain/entity classes, the DAO interface, the DAO implementation test and then the DAO implementation. When we write our test we know it will fail, because no such method will exist yet in the DAO implementation; we still need to create the implementation class, albeit without any real methods. So, let's get straight to it.

@Entity
public class Item {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    @ManyToOne
    private Order order;
    private String product;
    private double price;
    private int quantity;

    /**
     * @return the id
     */
    public Long getId() {
        return id;
    }

    /**
     * @return the order
     */
    public Order getOrder() {
        return order;
    }

    // --- remaining getters and setters follow.
    // --- also override toString() and hashCode().
}

@Entity
public class Order {

    // id and other instance variables ....

    @OneToMany(cascade = CascadeType.ALL)
    @JoinColumn(name = "ORDER_ID")
    private Collection<Item> items = new LinkedHashSet<Item>();

    /**
     * @return the items
     */
    public Collection<Item> getItems() {
        return items;
    }

    /**
     * @param items the items to set
     */
    public void setItems(Collection<Item> items) {
        this.items = items;
    }
}


A few things to note here if you are using Hibernate: ITEM has a many-to-one relationship with ORDER - an order has many items, whereas an item belongs to one order. In the database you will have an ORDER_ID column in the ITEM table. CascadeType.ALL means that persistence operations on an ORDER cascade to its ITEMs; in particular, deleting an ORDER also removes the corresponding ITEM rows. We never update the id of an ORDER because it is auto-generated, so an update to an ORDER has no bearing on its ITEMs. The ids are auto-generated. A short sketch of the cascade behaviour follows.
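
To make the cascade concrete, here is a minimal, hypothetical sketch (assuming the sessionFactory from part I is available and a transaction is active) of how saving and deleting an Order propagates to its Items:

Session session = sessionFactory.getCurrentSession();

Order order = new Order();
Item item = new Item();
item.setProduct("Sony Headphones");
order.getItems().add(item);

// CascadeType.ALL: saving the order also saves the new item and fills in its ORDER_ID column.
session.save(order);

// ...and deleting the order cascades the delete to its items as well.
session.delete(order);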

Now that we have our domain objects, we will write the ItemDao. You would write an OrderDao similarly, but I will leave that to you.

public interface ItemDao {

    /**
     * Given an item id, return the matching Item object.
     * @param itemId the id of the item
     * @return the matching Item object
     */
    public Item findById(Long itemId);

    /**
     * @return all items
     */
    public List<Item> findAllItems();
}


We have two simple methods: find an Item by item id, and findAllItems. You would ideally expand on this and write methods to delete, update, findItemByOrderId and so on. For now, let's keep things simple.

Now we will implement this dao, except we will return null from the implemented methods.

public class ItemDaoImpl implements ItemDao {

public Item findById(Long itemId){
return null;
}

public List<Item> findAllItems() {
return null;
}
}


We can now write our unit test! If you've followed part I, we set up the application context to inject the DAO implementation; the test will now read that application context so the DAO gets injected. I also mentioned that the test methods will be annotated with @Transactional. This ensures that when a test method returns, the transactions within it are rolled back, so our test db remains unchanged and we can run the tests again and again with the same test data. Of course, this also means that you will need to populate your test data, so let's do that first.

Run this query in MySQL and you will have 2 rows in the ITEM table.

INSERT INTO `test_hibernate`.`item` (`ID`, `PRODUCT`, `PRICE`, `QUANTITY`, `ORDER_ID`)
VALUES
(NULL, 'Sony Headphones', '99.99', '3', NULL),
(NULL, 'Logitech XZ Mouse', '15.99', '2', NULL);


For now we don't set ORDER_ID. Now that you have two rows, we can write our tests.

@ContextConfiguration(locations = {"classpath:/applicationContext.xml"})
@RunWith(SpringJUnit4ClassRunner.class)
public class ItemDaoImplTest {

    @Autowired
    private ItemDaoImpl dao;

    @Test
    @Transactional
    public void testFindById() {
        Item newItem = dao.findById(1L);
        assertEquals("Sony Headphones", newItem.getProduct());
    }

    @Test
    @Transactional
    public void testFindAllItems() {
        List<Item> itemList = dao.findAllItems();
        assertEquals(2, itemList.size());
    }
}

If you run this now, your tests will fail, because the methods are not really implemented yet.

So now, let's implement those methods in the DaoImpl.

// ID_FIELD is a constant in the implementation class, e.g. private static final String ID_FIELD = "id";
public Item findById(Long itemId) {
    DetachedCriteria itemCriteria = DetachedCriteria.forClass(Item.class);
    itemCriteria.add(Restrictions.eq(ID_FIELD, itemId));
    List<Item> itemList = findByCriteria(itemCriteria);
    if (null != itemList && itemList.size() > 0) {
        return itemList.get(0);
    }
    return null;
}

public List<Item> findAllItems() {
    DetachedCriteria itemCriteria = DetachedCriteria.forClass(Item.class);
    List<Item> itemList = findByCriteria(itemCriteria);
    if (null != itemList && itemList.size() > 0) {
        return itemList;
    }
    return null;
}

As I mentioned in part I, I use DetachedCriteria to query the database. One subtle advantage is that queries built this way are easy to read and write. The other, bigger advantage is that since these DAO methods will most likely be called from another module - for example from a web application's controller via a service method in this module - we want the Hibernate session to be opened only when the DAO method is called and cleaned up as soon as the method returns; a DetachedCriteria can be built without a session and executed against one later. A sketch of how the implementation class might be wired up is below.
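
For completeness, here is one hypothetical way the implementation class could be declared so that findByCriteria is available - assuming, as in part I, that the sessionFactory is injected and the DAO extends Spring's HibernateDaoSupport (the helper method shown is an assumption, not something from the original class):

public class ItemDaoImpl extends HibernateDaoSupport implements ItemDao {

    private static final String ID_FIELD = "id";

    // A small helper that delegates to the HibernateTemplate provided by HibernateDaoSupport.
    @SuppressWarnings("unchecked")
    protected List<Item> findByCriteria(DetachedCriteria criteria) {
        return getHibernateTemplate().findByCriteria(criteria);
    }

    // findById(..) and findAllItems() as shown above.
}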

Run the test again, and it will pass. This takes care of testing the DAO. In the near future I will show you how to write tests for service methods by mocking out the db, and I will point out the advantage of doing that.

Monday, June 6, 2011

Unit Testing Hibernate Data Access Objects using JUnit 4 - Part I

In this article, I want to show you how to write unit tests for your DAOs. You could use an in-memory db like HSQLDB, but using a test database is perfectly fine since you are going to roll back each db transaction. One thing to remember while unit testing a DAO is that you want to test against the same db vendor you are going to use in production, just pointed at a test db instance rather than the live one. The reason is this: let's say you are using Hibernate, as I am here. You want to verify that the SQL queries your specific version of Hibernate runs against that db actually work. Hibernate ships with dialect and driver support for different db vendors, but perhaps the encoding or configuration of the db you're going to use in production does not support certain SQL queries generated by Hibernate. If you test against an in-memory HSQLDB instance and pass, you will most likely assume it will also work with your specific db vendor. I just don't think that is accurate enough, especially if your queries involve complex joins.

Again, the rollback feature works in your favour here, and regardless of whether you are using an in-memory instance or not, you still need to populate some data before testing - how else would you test the find methods? Another piece of advice: try to use realistic data. I don't mean real credit card values, but not data like "AAAA" in place of a person's name either. You may run into various issues later when populating your test db with such data. One problem I can think of is if your entities are annotated with column specifications such as length and type and you have added data that is not 100% compatible with them. Another is the relationships between entities.

Moving on we will have these steps:

1. Pre-requisites
2. Setting up the application context
3. Writing Domain and DAO interface
4. Writing DAO unit tests
5. Writing DAO implementations

I will cover 1 and 2 in this part to get the framework in place. In the next part we will write our domain (just one), the DAO interface, the DAO unit test and then the DAO implementation. This is a logical order because we want to write the test first and then see what we need in order for the test to pass; that 'what we need' goes into our implementation. This is called, as you might have guessed, Test Driven Development.

Pre-requisites

* Spring Core library for dependency injection. We are also going to use SpringJunit4ClassRunner for unit testing.
* Hibernate 3.x. We will be using Hibernate's Criteria, specifically, Detached Criteria. For more info on using Criteria look here.
* MySQL db.

You can use Maven to configure all of these. Here's what part of the pom.xml looks like. If you need more help on configuring a maven project please look at my "How to setup a Maven Java Enterprise Application". You can find that under the category "Deployment". Here's the list of artifacts you'll need:

<dependency>
<groupId>commons-dbcp</groupId>
<artifactId>commons-dbcp</artifactId>
<version>1.2.2</version>
</dependency>
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-core</artifactId>
<version>3.3.2.GA</version>
</dependency>
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-annotations</artifactId>
<version>3.3.1.GA</version>
</dependency>
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-commons-annotations</artifactId>
<version>3.3.0.ga</version>
</dependency>
<dependency>
<groupId>javassist</groupId>
<artifactId>javassist</artifactId>
<version>3.6.0.GA</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-jcl</artifactId>
<version>1.5.8</version>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>5.1.16</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-test</artifactId>
<version>${spring.framework.version}</version> <!--version 3.0.5.RELEASE -->
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-context</artifactId>
<version>${spring.framework.version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-context-support</artifactId>
<version>${spring.framework.version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-core</artifactId>
<version>${spring.framework.version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-orm</artifactId>
<version>${spring.framework.version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-beans</artifactId>
<version>${spring.framework.version}</version>
</dependency>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.14</version>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.7</version>
</dependency>


Setting up the Application Context

When the test runs, it will scan the application context to inject the DAO. The implementation of the DAO interface uses Hibernate's sessionFactory to run our Hibernate queries. We also register our single Item domain/entity with the sessionFactory; that entity is mapped directly to the ITEM table of our db. I will not walk through creating the table since it is simple enough. Lastly, we need transactions in order to roll back our unit test methods; for this, we annotate our test methods with @Transactional. Below is the applicationContext.xml.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
                           http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.0.xsd">

<!--  This is where the properties related to datasource are read from -->
<bean id="propertyConfigurer">
<property name="location" value="classpath:hibernate.properties" />
<property name="ignoreUnresolvablePlaceholders" value="false" />
</bean>

<!--  Define dataSource to use -->
<bean id="dataSource">
<property name="driverClassName" value="${hibernate.jdbc.driver}" /> <!-org.gjt.mm.mysql.Driver -->
<property name="url" value="${hibernate.jdbc.url}" />
<property name="username" value="${hibernate.jdbc.user}" />
<property name="password" value="${hibernate.jdbc.password}" />
</bean>

<!--  The sessionFactory will scan the domain objects and their annotated relationships. -->
<bean id="sessionFactory">
<property name="dataSource" ref="dataSource" />
<property name="annotatedClasses">
<list>
<value>com.company.application.core.domain.Item</value>
..............
</list>
</property>
<property name="schemaUpdate" value="true" />
<property name="hibernateProperties">
<props>
<prop key="hibernate.connection.isolation">2</prop>
<prop key="hibernate.bytecode.use_reflection_optimizer">true</prop>
<prop key="hibernate.dialect">org.hibernate.dialect.MySQLDialect</prop>
<prop key="hibernate.jdbc.batch_size">20</prop>
<prop key="hibernate.max_fetch_depth">2</prop>
<prop key="hibernate.show_sql">true</prop>
<prop key="hibernate.format_sql">true</prop>
</props>
</property>
</bean>

<!--  Define Transaction Manager. We will use Hibernate Transaction Manager. -->

<bean id="transactionManager">
<property name="sessionFactory" ref="sessionFactory" />
</bean>
<!--  We will set transactional properties with annotation -->
<tx:annotation-driven />
<bean id="itemDao">
<property name="sessionFactory" ref="sessionFactory" />
</bean>
</beans>


One thing you might have noticed is that I could just as easily have annotated my DAO (with a stereotype annotation such as @Repository), had it picked up by component scanning and not defined it in the xml above. That is perfectly legal; a small sketch of that alternative follows. Now we're set to write our domain, DAO interface, DAO test and DAO implementation.
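
For reference, here is a minimal sketch of that alternative, assuming a component-scan element is added to the context (package and method names are illustrative):

// In applicationContext.xml (requires the context namespace):
//   <context:component-scan base-package="com.company.application.core.dao" />

@Repository
public class ItemDaoImpl extends HibernateDaoSupport implements ItemDao {

    // The sessionFactory still has to be provided, e.g. via an autowired init method,
    // because HibernateDaoSupport's setSessionFactory(..) is final and cannot be annotated directly.
    @Autowired
    public void init(SessionFactory sessionFactory) {
        setSessionFactory(sessionFactory);
    }

    // ... findById(..) and findAllItems() as in part II.
}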

Friday, June 3, 2011

Spring Batch: the object oriented way

Recently, I was working on a project where we had to re-write a VB application in Java. The application would read invoice numbers from a database based on an xml config (like a range of invoices, or invoices for a date range, and so on), and for each invoice it would create and write separate xml and excel files, with filenames matching the invoice numbers. That sounds simple enough, right? So why am I talking about Spring Batch? Here's the interesting part of the app: for every Invoice there is a BillingOrder (a one-to-one relationship), for every BillingOrder there is a 1-to-N relationship with Shipping, for every Shipping there is another 1-to-N relationship with Products and, believe me, for every Product there is a UserDetail. Now, what was so special (or lacking) about this VB application was that for each invoice it read, it created a new xml, wrote the invoice, closed it, and did that for n invoices. For each BillingOrder it opened (not created) the matching invoice file, appended data to it, closed it, and did that for exactly n BillingOrders. You see where I am going with this?

Now you would immediately say that Spring Batch is the way to go. Problem solved? I wish. The problem was compounded by the fact that we had to turn this module in on a very short deadline, and while they would have appreciated it if we used Spring Batch, it really was just meant to be a re-write. But heroes that we were (and still are ;)), we just had to use Spring Batch. So here's our little story below.

If you read the Spring Batch guide (we used version 2.1.8), you will find workable examples of reading and writing in chunks. For each Reader there is a Writer. I am not saying you have to read once and write once; that would defeat the point of a batch application, wouldn't it? You can read 1..100 or n items from a source (db, text or xml), generally populating a domain object for each, and when your chunk threshold is reached (1..100 or n items) your writer writes them out to db, text or xml and, most importantly (at least to me), releases those objects (1..100 or n of them) for garbage collection, which happens when it does. Anyway, for our application we then had these options:

  • Option 1: using domain objects as dumb DTOs. In the job's first step, read x invoices from the db and hand them over (the framework does that for you) to the writer, which writes them out one by one, creating new invoice xmls named after the invoice numbers. Use a StepListener to pass the invoice number list to the next step. In that next step, read exactly x BillingOrders from another table in the db and hand them over to another writer (remember, 1 reader : 1 writer), which opens each invoice file and appends the BillingOrder values as child nodes. The next step would then pass the BillingOrder ids via another StepListener so that the Shipping details can be written. Perfect? Well, it gets tricky here. This is a 1-to-N relationship, so whether or not you send the Shipping details in sorted order - meaning <Shipping> lists for BillingOrder 1 first, then BillingOrder 2, and so on - you still have to open the xml files shippingList.size() number of times. You might tweak the code a little to stop this from happening, but still: for x invoices you are opening at least (x times 5) xml files. Good solution? Hardly.

  • Option 2: the object-oriented way. The right way. Instead of using our domains as dumb Data Transfer Objects, we implement the relationships in them. Invoice now has a BillingOrder billing (why not the other way around? I will explain below), BillingOrder has a List<Shipping> shippingList (use a Set if you want to ensure uniqueness), Shipping has a List<Product> productList and Product has a UserDetail userDetail (see the sketch after this list). Our invoiceReader also changes. First of all, we need only 1 reader because we want only 1 writer instance associated with 1 xml output file, so we don't use any StepListeners either; consequently, there is only 1 step in this job. What we do is read all the invoices from the db into a list of Invoice objects, then loop through that list, populating the billingOrder instance variable of each invoice object (this is why billingOrder lives inside the invoice object: Invoice has to know about BillingOrder). Inside that, another loop populates each billingOrder of an Invoice with its Shipping entries, and so on. You will have multiple loops, but at the end you are able to send 1, or 100, or n invoice objects with all their relationships intact to the writer. The writer then only creates/opens 1 xml per complete invoice. If this looks like a memory issue, you can reduce the chunk threshold.
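
A minimal sketch of those domain relationships (the field and class names follow the description above; everything else is illustrative):

public class Invoice {
    private String invoiceNumber;
    private BillingOrder billing;          // Invoice knows about its BillingOrder
    // getters and setters ...
}

public class BillingOrder {
    private List<Shipping> shippingList = new ArrayList<Shipping>();
    // getters and setters ...
}

public class Shipping {
    private List<Product> productList = new ArrayList<Product>();
    // getters and setters ...
}

public class Product {
    private UserDetail userDetail;
    // getters and setters ...
}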


If you've read this thoroughly, you might have one question. Why didn't Jason Kidd shoot this well early in his career? Kidding. You would be asking, "Where is the config file read?" What you need to understand about steps is that there is exactly one instance of the reader and exactly one instance of the writer for each step execution. For example, say there are 500 rows of invoices and you set the chunk threshold to 100. When the step starts, a new instance of the reader (InvoiceReader) is created. After 100 items have been read and sent for writing, the same invoiceReader instance is used again and the next hundred are sent - how else would it know to process the next 100? This happens 5 times, and on the sixth occasion a null is returned (not an exception; you really need to return null) to mark the end of the step. Since only one instance of the reader (and writer) exists, you can create an instance-level variable in the reader, like boolean isFirstRead, and flip it on the first read. The code that reads the config file goes inside that if condition, so in the subsequent cycles the config file is not read again and you do not start from the top. A sketch of this is below.
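
Here is a minimal, hypothetical sketch of such a reader (class and helper names are made up; it assumes the Spring Batch 2.1.x ItemReader interface):

import java.util.ArrayList;
import java.util.List;

import org.springframework.batch.item.ItemReader;

public class InvoiceReader implements ItemReader<Invoice> {

    private boolean isFirstRead = true;
    private List<Invoice> invoices = new ArrayList<Invoice>();
    private int index = 0;

    public Invoice read() throws Exception {
        if (isFirstRead) {
            // Read the xml config and load the matching invoices exactly once per step execution.
            invoices = loadInvoicesForConfiguredRange();
            isFirstRead = false;
        }
        if (index < invoices.size()) {
            return invoices.get(index++);
        }
        return null; // returning null signals the end of the step
    }

    // Hypothetical helper: parses the xml config (invoice range, date range, ...)
    // and queries the db for the matching invoices.
    private List<Invoice> loadInvoicesForConfiguredRange() {
        return new ArrayList<Invoice>();
    }
}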

So that, as they say, is that. I wanted to write pseudo code to explain the loops, but I kept on writing and didn't realize that I wasn't.

Wednesday, June 1, 2011

Creating and consuming a EJB 3 Message Driven Bean Part II

In part I we did all the dirty work: set up the JMS connection factory and JMS destination in the application server and then created a stateless session bean that sends a message to our XYZ Warehouse application. One thing I forgot to mention is that you are probably asking, "where is the messaging server?" You might have heard of ActiveMQ or WebSphere MQ; here we're using the messaging server included within the AS. So then, let's get to creating our MDB.

Create our Message Driven Bean
A MessageDrivenBean is just like our stateless bean in the sense that it does not retain state. For you and me this means: be careful with transaction-related code, because in the event of an exception you will not be able to recover from it here. A little confused? Simply understand that it is not your shopper who started this transaction, so there is no client for the exception to be returned to and handled. Maybe I should write an article on transactions. :)

The first thing to do is annotate the class with @MessageDriven, which tells the AS that this bean is an MDB. You also have to specify that you want to use the destination "jms/InvoiceQueue"; refer to the source below. You then implement the MessageListener interface, and with it the void onMessage(Message message) method. Basically, this method encapsulates your business logic: what type of message to consume and what to do with it. As with any EJB, you will need an associated EntityManager context to do your CRUD operations, but that really is not where I want to focus. What you should take from the code below is that we are consuming an ObjectMessage that was originally sent from the producer app, and, as I pointed out, we need an object similar to, or at least capable of storing, the properties of the sent object (Item). We have ItemOrder in this client app to do exactly that.

import com.factory.entities.ItemOrder;
import java.util.ArrayList;
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.ObjectMessage;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@MessageDriven(mappedName = "jms/InvoiceQueue", activationConfig = {
    @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge"),
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")
})
public class InvoiceMessageBean implements MessageListener {

    @PersistenceContext
    EntityManager em;

    public InvoiceMessageBean() {
    }

    public void onMessage(Message message) {
        try {            
            ObjectMessage objectMessage = (ObjectMessage) message;
            ArrayList list = (ArrayList)objectMessage.getObject();
            ItemOrder itemOrder = new ItemOrder();
            itemOrder.setBarcode(Integer.parseInt(list.get(0).toString()));
            itemOrder.setTotalItemsToOrder(Integer.parseInt(list.get(1).toString()));
            itemOrder.setCompanyID(19284);
            em.persist(itemOrder);
        } catch (JMSException jmse) {
            jmse.printStackTrace();
        }
    }
}


So that's it. You're now ready to create and consume your EJB 3 MDB. You should be able to appreciate the asynchronous nature of MDBs! Enterprise Java rules!

Creating and consuming a EJB 3 Message Driven Bean Part I

"Finally, The Rock has come back to...." More like finally, I am going to write an article (albeit two parts article) about programming, if you don't regard SQL as programming. In the first part I am going to write about how to send message to a messaging server including creating a new JMS resource. I used glassfish v2.1 for this. You can use do the same on your preferred application server. In the next part I will write about consuming the message via the MDB. So let's begin. I have created a checklist of functions we need to perform in order deploy our MDB and consume it.

Checklist:


  1. Create a new JMS connection factory to allow creation of JMS objects in the application server. (Messaging server)

  2. Create a new JMS Destination that will be repository for messages sent. (Messaging server)

  3. Use/Create a  session bean to send the message. (Producer Application)

  4. Create our MessageDrivenBean to consume it. (Consumer Application)


Create a new JMS connection factory to allow creation of JMS objects in the application server.
Before we start with creating our MDB and consumer, the first thing I suggest doing is to create a new JMS connection factory in your application server. Think of a JMS connection factory as being like a JDBC connection pool. Your JDBC connection pool creates a pool of connections, and whenever your application asks for a new connection object, the pool serves it. When you're done, the connection object is returned to the pool, and if it is still active and valid it can serve another connection request. Using a pool is just a lot faster, and the application server is responsible for managing it; of course, you are responsible for closing the connection so that it can be returned to the pool. We have the option of using javax.jms.TopicConnectionFactory, javax.jms.QueueConnectionFactory or simply javax.jms.ConnectionFactory. Even though we're going to be consuming from a queue (why? I will explain the difference between a topic and a queue below), staying true to the Abstract Factory pattern we will use the interface ConnectionFactory. Have a look at the image below: I've created a new JMS Connection Factory with JNDI name jms/InvoiceQueueFactory and left the pool settings at the AS defaults.

[caption id="attachment_64" align="alignnone" width="300" caption="JMS Connection Factory"]JMS Connection Factory[/caption]

Create a new JMS Destination that will be repository for messages sent
Now that you've created the connection factory, we are ready to create a JMS Destination. The JMS Destination is where messages are stored until the MDB consumes them. There are two types of destinations:

  • Queue - For point-to-point communication.

  • Topic - For publish-subscribe communication.


Which messaging paradigm you want to use depends on your business model. In our sample, whenever the inventory level of an item in the ABC store falls below a certain threshold, we are going to send an order request to the XYZ warehouse for that item.

Let's say an instance of ItemOrder class/entity in the Warehouse client app has the following properties

  • int barcode;

  • int totalItemsToOrder;

  • String companyID;


And an instance of Item class/entity in the ABC producer app has the following fields

  • int barcode;

  • String name;

  • double price;

  • int minQuantity;

  • int totalItemsInStock;


Now, what happens is that when a shopper buys an item with barcode 3884994 from the shop, totalItemsInStock drops below minQuantity. What we want to do is send a message to the messaging server so that at some later time (maybe 5 seconds from now, maybe 2 days from now) the Warehouse application consumes it and sends a packaging and shipping order to its distribution vendor. What is important here is that one and only one message is sent to the warehouse; otherwise they'd end up shipping more than your store needs. We would also like to make the message durable, but that is a configuration on the MDB itself, which I will discuss later. Anyway, the point is, we need to send a point-to-point message with the specific item barcode and our company ID (their system's requirement, apparently) and make sure that message stays there until it is consumed (hopefully the message does not expire).

You'd use publish-subscribe when this is not a requirement; generally there is a one-to-many relationship between publisher and subscribers. That doesn't fit our scenario here, so Queue it is!

Creating this is easy. Just use another jndi name like I've done below and specify the type as Queue.

[caption id="attachment_65" align="alignnone" width="300" caption="JMS_Destination_Resource"]JMS_Destination_Resource[/caption]

Use/Create a  session bean to send the message. (Producer Application)
As discussed above, whenever the inventory level drops below the threshold for an Item, we need to send a message to the Warehouse system via our messaging server to request a new order. What we want to send is our ItemOrder data. The valid message types are TextMessage, BytesMessage, StreamMessage, ObjectMessage and MapMessage; please check the API for more info. As you may have guessed, we're going to use ObjectMessage. The caveat here is that you need to have a similar object in the Warehouse system, or at least one with the properties of the ItemOrder object. You get the point.

So without further ado, below is a stateless bean that sends a request to the messaging server when the invoice is saved (saveInvoice), if the inventory level drops below the min threshold.

import com.store.entities.Customer;
import com.store.entities.Invoice;
import com.store.entities.Item;
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.annotation.Resource;
import javax.ejb.EJB;
import javax.ejb.Stateless;
import javax.jms.JMSException;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.ejb.Remove;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.MessageProducer;
import javax.jms.ObjectMessage;
import javax.jms.Session;

@Stateless
public class InvoiceBean implements InvoiceRemote {

    private Invoice invoice = new Invoice();
    @EJB
    private ItemRemote itemService;
    @EJB
    private CustomerRemote customerService;
    @PersistenceContext
    EntityManager em;
    @Resource(name = "jms/InvoiceQueueFactory")
    private ConnectionFactory connectionFactory;
    @Resource(name = "jms/InvoiceQueue")
    private Destination destination;
    private int cartTotal;

    public void addItem(int barcode) {
        Item it = new Item();
        it = (Item) itemService.findItem(barcode);
        if (it.getQuantity() < 1) {
            System.out.println("No item available..........");
        } else {
            it.setQuantity(it.getQuantity() - 1);
            it = (Item) itemService.updateItem(it.getId(), it.getName(), it.getQuantity(), it.getPrice(), it.getBarcode(),
                    it.getMinQuantity(), it.getImage(), it.getItemsToOrder(), it.getShippingCost());
            if(getInvoice().getItems()==null){
                List<Item> items = new ArrayList<Item>();
                items.add(it);
                getInvoice().setItems(items);
                this.setCartTotal(1);
            }else{
                getInvoice().getItems().add(it);
                this.setCartTotal(this.getCartTotal()+1);
            }
            getInvoice().setTotalCost(it.getPrice()+getInvoice().getTotalCost());
        }

    }

    public void removeItem(int barcode) {
        Item it = itemService.findItem(barcode);
        it.setQuantity(it.getQuantity() + 1);
        itemService.updateItem(it.getId(), it.getName(), it.getQuantity(), it.getPrice(), it.getBarcode(),
                it.getMinQuantity(), it.getImage(), it.getItemsToOrder(), it.getShippingCost());
        getInvoice().getItems().remove(it);
        getInvoice().setTotalCost(getInvoice().getTotalCost()- it.getPrice());
        this.setCartTotal(this.getCartTotal()-1);
    }

    @Remove
    public void saveInvoice() {
        em.persist(getInvoice());
        for (Item i : getInvoice().getItems()) {
            Item it = new Item();
            it = itemService.findItem(i.getBarcode());
            if (it.getQuantity() <= it.getMinQuantity()) {

                try {
                    Connection connection = connectionFactory.createConnection();
                    Session session = connection.createSession(true,
                            Session.AUTO_ACKNOWLEDGE);
                    MessageProducer producer = session.createProducer(destination);
                    ObjectMessage message = session.createObjectMessage();
                    ArrayList list = new ArrayList();
                    list.add(it.getBarcode());
                    list.add(it.getItemsToOrder());
                    message.setObject(list);
                    producer.send(message);
                    session.close();
                    connection.close();
                } catch (JMSException ex) {
                    Logger.getLogger(InvoiceBean.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        }
    }

    @Remove
    public void cancelInvoice() {
        this.setCartTotal(0);
        for (Item i : getInvoice().getItems()) {
            Item it = itemService.findItem(i.getBarcode());
            it.setQuantity(it.getQuantity() + 1);
            itemService.updateItem(it.getId(), it.getName(), it.getQuantity(), it.getPrice(),
                    it.getBarcode(), it.getMinQuantity(), it.getImage(), it.getItemsToOrder(), it.getShippingCost());
        }
        setInvoice(new Invoice());
        setCartTotal(0);
    }

    public void addCustomer(int customerID) {

        Customer cust = new Customer();
        cust = (Customer) customerService.findCustomer(customerID);
        System.out.println(cust.getEmail());
        getInvoice().setCustomer(cust);
    }

    /**
     * @return the invoice
     */
    public Invoice getInvoice() {
        return invoice;
    }

    /**
     * @param invoice the invoice to set
     */
    public void setInvoice(Invoice invoice) {
        this.invoice = invoice;
    }

    /**
     * @return the cartTotal
     */
    public int getCartTotal() {
        return cartTotal;
    }

    /**
     * @param cartTotal the cartTotal to set
     */
    public void setCartTotal(int cartTotal) {
        this.cartTotal = cartTotal;
    }
}


As you can see, the stateless bean performs its other business methods, and when it finally saves the invoice (saveInvoice, marked @Remove so the instance is released back to the pool), it sends a message to the messaging server, but only if the inventory level has dropped below minQuantity.

So this takes care of part I. In Part II I will write the Warehouse's MDB.


Tuesday, May 31, 2011

SQL Profiler and Database Tuning Advisor and optimizing the db server

About a year and half ago, I'd done some work on tuning my production database. The db was SQL server 2005 but what I will write below should work for SQL server 2k8 as well.

My notes on using SQL Profiler and the Database Tuning Advisor (err... tips, if you will):

  • Common columns to use are TextData, Duration, CPU, Reads, Writes, ApplicationName, StartTime and EndTime.

  • Do not trace to table. If you want in a table, import it.

  • Right-click on column to apply filter starting with that column.

  • Not all events within a group are important.

  • The EventClass and SPID columns cannot be unselected.

  • Do not use on the PC where the database resides. Use Profiler from a different PC.

  • If your server is busy, do not check the "Server processes trace data" option.

  • Turn Auto Scroll off to monitor a previous event without being scrolled to the bottom.

  • Bookmarking is useful to identify which event to look at at a later time.

  • In order to minimize the load on the SQL server, reduce the number of events traced/captured.

  • The same goes with data columns.

  • Useful events to track slow running stored procedures are RPC:Completed, SP:StmtCompleted, SQL:BatchStarting, SQL:BatchCompleted and ShowPlan XML.

  • Useful data columns are Duration, ObjectName, TextData, CPU, Reads, Writes, IntegerData, DatabaseName, ApplicationName, StartTime, EndTime, SPID, LoginName, EventSequence, BinaryData.

  • To find which queries run most frequently, store the trace to a table and query it (this should be captured from the production server):
    SELECT [ObjectName], COUNT(*) AS [SP Count]
    FROM [dbo].[Identify_query_counts]
    WHERE [Duration] > 100
    AND [ObjectName] IS NOT NULL
    GROUP BY [ObjectName]
    ORDER BY [SP Count] DESC

  • Testing for deadlocks use events like Deadlock graph, Lock: Deadlock, Lock: Deadlock Chain, RPC: Completed, SP: StmtCompleted, SQL: BatchCompleted, SQL: BatchStarting.

  • Useful data columns are TextData, EventSequence, DatabaseName.

  • To test for blocking issues, use the BlockedProcessReport event, but also run this:
    SP_CONFIGURE 'show advanced options', 1 ;
    GO
    RECONFIGURE ;
    GO
    SP_CONFIGURE 'blocked process threshold', 10 ;
    GO
    RECONFIGURE ;
    GO
    -- do this to turn it off:
    SP_CONFIGURE 'blocked process threshold', 0 ;
    GO
    RECONFIGURE ;
    GO

  • Useful Data Columns are Events, TextData, Duration, IndexID, Mode, DatabaseID, EndTime

  • For a production environment, set the blocked process threshold to 1800 (30 minutes) and be sure to turn it off afterwards.

  • To test for excessive index/table scans, use events like Scan:Started along with RPC:Completed, SP:StmtCompleted, SQL:BatchStarting, SQL:BatchCompleted and Showplan XML.

  • Useful Data columns are ObjectID, ObjectName, Duration, EventCall, TextData, CPU, Reads, Writes, IntegerData, StartTime, EndTime, EventSequence and BinaryData.

  • DTA: Provide representative workload in order to receive optimal recommendations.

  • Capture only the RPC:Completed, SP:StmtCompleted and SQL:BatchCompleted events.

  • Data Columns used are TextData, Duration, SPID, DatabaseName and LoginName.

  • Check Server Processes Trace Data to capture all trace events.

  • Run traces quarterly or monthly to feed to DTA to ensure indexes are up to date.

  • Create baseline traces to compare traces after indexing to check which queries run most often and their average duration.

  • Run only one trace at a time.

  • Do not run Profiler when running DB backup.

  • Set the Trace Off time when you run trace.

  • Run DTA at low traffic times.

  • Reduce the use of cursors in the application. Cursors that live in jobs or are rarely used can be ignored if they execute on time and during the hours when the application is least accessed.

  • Index created without the knowledge of queries serve little purpose.

  • Sort the trace (for tuning) by CPU or Reads; this surfaces the costly queries.


Basically there are different parameters to look at. First, I optimized the queries that are used most frequently by creating indexes and, where possible, re-writing them after looking at the query execution plan. Then, since I knew those indexes would become fragmented as data gets updated or deleted in the tables in question, I set up a defragmentation plan as a job on the db server: indices with fragmentation between 0 and 20 percent were left untouched, those between 20 and 40 were reorganized, and those above 40 were rebuilt.

Secondly, I also examined any queries or stored procedures that were hogging the CPU, i.e. not responding and causing other queries to wait for them to complete. There was one that was not written very well, so I re-wrote it.

After that, I checked other server parameters to see whether the server actually met the standard; such parameters are 'Memory -> Pages/sec' and 'Memory -> Available Bytes'. We had a 32-bit processor, so we couldn't upgrade the RAM alone; we had to upgrade to a 64-bit server with, initially, 8 GB of RAM, enabling 3 GB of process space. The reason for upgrading to only 8 GB was, of course, that we wanted to see gradual performance improvement.

Then, I adjusted the connection pooling parameters of my application server (JBoss 4.0.3 SP1) after a lot of load testing; I think I should write an article on that later. It was set up with Apache forwarding all the non-static requests (everything except images and html) to JBoss. I won't dwell on this too much for now.

Lastly, all the developers on the team focused their attention on checking the source code to see whether connections were being opened and closed properly. The application was using plain JDBC, so this was quite a tedious task. We even managed to write code to flush a connection after it reached a certain inactive threshold and to log whenever it did that. I know most DBAs would ask us to do this step first, but either way our queries and db server needed optimization/upgrading.

There was an emailing app which sent newsletters to over 150k users at the time. It used to take 5-6 hours, depending on the traffic on the application. That drastically dropped to less than an hour! :)

References:

Query to get the top 20 most executed queries in the database

SELECT TOP 20 SUBSTRING(qt.text, (qs.statement_start_offset/2)+1,
((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(qt.text) ELSE
qs.statement_end_offset
END - qs.statement_start_offset)/2)+1), qs.execution_count, qs.total_logical_reads,
qs.last_logical_reads,
qs.min_logical_reads, qs.max_logical_reads, qs.total_elapsed_time, qs.last_elapsed_time,
qs.min_elapsed_time, qs.max_elapsed_time, qs.last_execution_time, qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE qt.encrypted=0
ORDER BY qs.total_logical_reads DESC


Query to identify wait times

Select top 10 *
from sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC


The job to set defragmentation logic

USE [xxxx]--Your Db name
GO

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

ALTER PROCEDURE [dbo].[sp_IndexDefrag]
AS
DECLARE @DBName NVARCHAR(255),
        @TableName NVARCHAR(255),
        @SchemaName NVARCHAR(255),
        @IndexName NVARCHAR(255),
        @PctFrag DECIMAL,
        @Defrag NVARCHAR(MAX)

IF EXISTS (SELECT * FROM sys.objects WHERE object_id =  object_id(N'#Frag'))
    DROP TABLE #Frag

Create table #Frag
        (DBName NVARCHAR(255),
         TableName NVARCHAR(255),
         SchemaName NVARCHAR(255),
         IndexName NVARCHAR(255),
         AvgFragment DECIMAL)

EXEC sp_msforeachdb 'INSERT INTO #Frag(DBName,
                                       TableName,
                                       SchemaName,
                                       IndexName,
                                       AvgFragment)
                     Select ''?'' As DBNAME,
                            t.Name As TableName,
                            sc.Name As SchemaName,
                            i.name As IndexName,
                            s.avg_fragmentation_in_percent
                     FROM

?.sys.dm_db_index_physical_stats(DB_ID(''?''),NULL,NULL,NULL,''Sampled'') As s
                     JOIN ?.sys.indexes i
                     ON s.Object_Id = i.Object_id
                        AND s.Index_id = i.Index_id
                     JOIN ?.sys.tables t
                     ON i.Object_id = t.Object_id
                     JOIN ?.sys.schemas sc
                     ON t.schema_id = sc.SCHEMA_ID
                     WHERE s.avg_fragmentation_in_percent > 20
                     AND t.TYPE = ''U''
                     AND s.page_count > 8                    
                     ORDER BY TableName, IndexName'

                     DECLARE cList CURSOR FOR
                     SELECT * FROM #Frag
                     where DBName = 'XXXX' --your db

                     OPEN cList
                     FETCH NEXT FROM cList
                     INTO @DBName, @TableName, @SchemaName, @IndexName, @PctFrag

                     WHILE @@FETCH_STATUS = 0
                     BEGIN
                          IF @PctFrag BETWEEN 20.0 AND 40.0
                          BEGIN
                               SET @Defrag = N'ALTER INDEX ' + @IndexName + ' ON ' +

@DBName + '.' + @SchemaName + '.' + @TableName + ' REORGANIZE'
                               EXEC sp_executesql @Defrag
                               PRINT 'Reorganize index: ' + @DBName + '.' + @SchemaName +

'.' + @TableName + '.' + @IndexName
                          END
                          ELSE IF @PctFrag > 40.0
                          BEGIN
                               SET @Defrag = N'ALTER INDEX ' + @IndexName + ' ON ' +

@DBName + '.' + @SchemaName + '.' + @TableName + ' REBUILD'
                               EXEC sp_executesql @Defrag
                               PRINT 'Rebuild index: ' + @DBName + '.' + @SchemaName + '.'

+ @TableName + '.' + @IndexName
                          END

                          FETCH NEXT FROM cList
                          INTO @DBName, @TableName, @SchemaName, @IndexName, @PctFrag
                    END
                    CLOSE cList
                    DEALLOCATE cList

                    DROP TABLE #Frag


So that's it. ;) I know this is very long for a post, but trust me, this work takes days if not weeks. And optimization is an on-going process; you cannot sit back and relax once you have done it the first time.

 

Setting up Maven Enterprise Application Part II

Part II: Setting up individual modules of an EAR






If you've been following along, this is part 2 of a two-part series on setting up a Maven Enterprise Application. Here is part I, which shows you how to use Maven to set up the overall project structure, configure the build path, and add module dependencies and libraries. In this second part, I will show you how to structure the individual modules, the JAR and the WAR. Since I've assumed that you're using Eclipse/RAD or any other IDE, I've already discussed how to add the WAR to the EAR in part I; from the IDE this is basically adding the WAR module dependency to the EAR.

Overall View
Before I begin, please see the image below from RAD for the overall structure. For some of you, this should be enough. You can get this view from the Enterprise Explorer in RAD or the Project Explorer in Eclipse. Some of you may prefer the Navigator view, but I am more comfortable with the former two.

[caption id="attachment_53" align="alignnone" width="211" caption="Module Structure"]module_structure[/caption]

JAR Module

The JAR module will, as I mentioned in part I, house your model and the unit tests for it. Your model will consist of domains, DAOs (your db-technology-dependent interfaces) and services (your business interfaces, not tied to the underlying db). Additionally, you may have custom exception or helper classes.

Your source folder will be under src/main/java. Any resources like perhaps an applicationContext file will be added to src/main/resources. In part I, I mentioned that these locations are added to the classpath. So to read any resource from there you'd write something like classpath*:applicationContext.xml.
Your unit tests (unit tests for JAR module and integration tests for WAR module) will be housed under src/test/java and any resource like applicationContext file will be housed under src/test/resources.

Basically, if you use an ORM tool, your domains or entities define the relationships and constraints. These domains are packaged under com.yourCompanyName.applicationName.domain. You don't write tests for domain objects.

Your DAO interfaces are packaged under com.yourCompanyName.applicationName.dao and the implementations under com.yourCompanyName.applicationName.dao.impl. If you're using Spring for dependency injection, you'd annotate the implementations with @Repository or define a bean in the applicationContext.xml file. You would then be able to autowire your DAOs into your services and service unit tests. DAOs do have to be unit tested, so you'd create the same package, com.yourCompanyName.applicationName.dao.impl, under src/test/java for those tests. I am not going into the details of writing unit tests, DAOs or any other code in this two-part series; I will have articles on them later.

Your service classes are structured similarly to your DAOs, under com.yourCompanyName.applicationName.service and com.yourCompanyName.applicationName.service.impl, and you'd use DI to initialize them via annotation or xml the same way you would for the DAOs, because these service methods will be called from the WAR module. Your tests for the services go under com.yourCompanyName.applicationName.service.impl under src/test/java. For unit testing you'd want to use a mocking tool like EasyMock or Mockito: you mock the DAOs out of the service tests because you do not want to tie your service tests to a specific DAO implementation - tomorrow you might switch from Hibernate to iBatis or plain JDBC. A small sketch of such a test follows.
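
Here is a minimal, hypothetical sketch of such a service unit test using Mockito (the ItemServiceImpl class, its constructor and findItem method are illustrative; the ItemDao interface is the one from my DAO articles):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class ItemServiceImplTest {

    @Test
    public void testFindItemDelegatesToDao() {
        // Mock the DAO so the test never touches a real database or a specific DAO implementation.
        ItemDao mockDao = mock(ItemDao.class);
        Item expected = new Item();
        when(mockDao.findById(1L)).thenReturn(expected);

        // Hypothetical service implementation that takes its DAO via constructor injection.
        ItemServiceImpl service = new ItemServiceImpl(mockDao);

        Item actual = service.findItem(1L);

        assertEquals(expected, actual);
        verify(mockDao).findById(1L);
    }
}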

So this is basically it as far as your JAR module goes.

WAR Module

The WAR module will follow the same naming conventions as the JAR module. Your sources will be housed under src/main/java and the resources (not web.xml) under src/main/resources. You can have a folder called WebContent (it exists by default in RAD's dynamic web project) under which you will have your WEB-INF folder for your web descriptor (web.xml) and a static folder for your pages, scripts, css and images. Your tests will be under src/test/java as usual.

Suppose you're using Struts 2; you would then have the action classes (the controllers in Struts 2) under the com.yourCompanyName.applicationName.web.action package, sticking to the naming convention. Your actions will call the injected service interfaces from the JAR module. Your applicationContext file will almost always be the same as what you defined in the JAR module.

Now you'd want to do integration tests in the WAR module. The package to create is the same as for your action classes (or controllers), but under src/test/java. For integration testing, you'd use a tool to mock out the HttpServletRequest and session. You can also mock out the db as you did in the service unit tests in the JAR module; that way you are not dependent on a DAO implementation.

The only thing remaining is to add JAR module as dependency to the WAR module. This I explained in part I. But it is pretty straightforward from the IDE.

You are now all set to start working on your EAR project.

Final Note: I have not covered unit testing or integration testing in any detail for you to be able to go ahead and start writing them. The purpose was only to show you how to setup an EAR using maven. I will have articles related to them later. But the good part is that I won't have to explain the folder structures then. I can always refer to these two articles :)

 

Monday, May 30, 2011

Setting Up Maven Enterprise Application Part I

Part I: Setting up maven and overall structure






This will be the first of a two-part series in which I would like to show how to quickly set up a Java Enterprise Application. In this part you will set up the overall project structure and configure your libraries, build path and module dependencies. I use RAD/Eclipse with the m2eclipse plugin (which you can find here) for development, but the configuration is independent of which IDE or text editor you choose to use. So let's get started.

Pre-requisites:

  • Maven 2. I used maven version 2.0.9. For a higher version of maven, please refer to maven documentation.

  • Eclipse/RAD. Although you can do this without an IDE, it will nonetheless be easier to configure in an IDE since this is an Enterprise Application.

  • JDK 1.4 or higher. I prefer JDK 5 or higher.


Overall structure
The enterprise application that I am going to set up will have a JAR module for services and DAOs, a WAR module that imports the JAR module, and an EAR module that includes the WAR module. If you wish to add more modules, you can follow a similar structure. You could have only WAR and JAR modules in Eclipse, but it is preferable to house them within an EAR; RAD also requires you to have a deployable EAR, and it is our WAR module that will be deployed. To make this a Maven project we will also have a parent pom (which is not a module) sitting above all three modules to package them together.

Look at the image below to see how the structure will look:
(Image: Overall Structure)

Setting individual modules starting with parent-pom
When you look at the image above, you will see three files: pom.xml, .project and .classpath. All three are important to configure. The IDE will generate your .classpath and .project files, but I will nevertheless go through all of them to ensure that you understand what the IDE is generating. You can also see the .settings folder; it is specific to the parent-pom and contains additional project/IDE-related configuration, but I will not go through it because I don't want to make this too long. The configuration for the individual modules will be similar to what you will do for the parent-pom, so if you understand how I am going to configure these three files, you will easily be able to configure the rest.
So let's begin with pom.xml. Pom files are read by maven in order to do the following:

  • Package modules. In this parent pom, you will package all three modules. The package name will be their folder names. In the EAR's pom, you would add the WAR module and in the WAR the JAR module. The JAR module will not have any module dependency.

  • Define versioning, application description, url, name and so on. This is self explanatory. However, you'd want version to be consistent across all modules.

  • Dependencies. You will define all the artifacts, and their versions, that you'd use in your application. In this parent pom, you'd define all the artifacts that are common to more than one module. For example, you'd define log4j here since you'd need logging in both the JAR and WAR modules; the same can be said of JUnit and the Spring core libraries.

  • Developers and Contributors. Optional. List out the developers/contributors and their roles.

  • Plugin lists. This lets you define the goals for your build process. For example, you'd use Cobertura to see what percentage of your source code is covered by unit tests.

  • Reporting List. Optional. If you need to generate reports, you'd use this. Again, if, as I mentioned, you want a Cobertura report, this is where you define the cobertura-maven-plugin.

  • Repositories. A repository is where you pull all the artifacts from. Some private repositories need authentication, but most don't. While Maven's settings.xml is the usual place to define your repositories, this is another place to do so.


Please look at the code below for more info:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<!--
This is the 'parent' POM (Project Object Model) which will have the following
nodes inherited by any POM which declared the <parent/> node to point to this
POM. Please note: This is not the 'super POM' which is supplied by Maven itself.
The super POM has its values inherited by all POMs.

* dependencies
* developers and contributors
* plugin lists
* reports lists
* plugin executions with matching ids
* plugin configuration

@author yourName
@version 1.0
-->

<!--
The POM version.
-->
<modelVersion>4.0.0</modelVersion>

<!--
The organization that is creating the artifact. The standard naming convention
is usually the organization's domain name reversed, like a package name in Java.
-->
<groupId>com.yourCompanyName.ApplicationName</groupId>

<!--
The artifact name. This will be used when generating the physical artifact name.
The result will be artifactId-version.type.
-->
<artifactId>parent-pom</artifactId>

<!--
The type of artifact that will be generated. In this case no real artifact is
generated by this POM, only the sub projects.
-->
<packaging>pom</packaging>

<!--
The version of the artifact to be generated.
-->
<version>0.0.1-SNAPSHOT</version>

<!--
The name of the project to be displayed on the website.
-->
<name>Your Application Name</name>

<!--
The description of the project to be displayed on the website.
-->
<description>
Description of your app
</description>

<!--
The url of the project to be displayed on the website.
-->
<url>http://www.WARModuleURL.com</url>
<!--
This project is an aggregation/multi-module which includes the following
projects. Please note: the value between the module node is the folder
name of the module and not the artifactId value.
-->
<modules>
<module>AppNameJAR</module>
<module>AppNameWAR</module>
<module>AppNameEAR</module>
</modules>

<!--
This segment lists the inherited dependencies for each child POM.
-->
<dependencies>
<dependency>
<groupId>org.mockito</groupId>
<artifactId>mockito-core</artifactId>
<version>1.8.5</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.4</version>
<scope>test</scope>
</dependency>
......
</dependencies>

<!--
The following node defines the developers that are working on the project,
their roles and contact information. This will be used when the site is
generated for the project. (mvn site).

* id - The id of the developer.
* name - The display name that will be used for the display name under 'Project Team' of the website.
* email - The e-mail address of the team member which will be displayed.
* roles - A list of roles the member fulfills.
* organization - The organization of the developer.
* timezone - The timezone of the developer.
-->
<developers>
<developer>
<id>12344</id>
<name>Your Name</name>
<email>yourEmail</email>
<organization>
ABC company INC
</organization>
<organizationUrl>http://www.ABC_Company.com</organizationUrl>
<roles>
<role>Technical Leader</role>
</roles>
<timezone>+5:45</timezone>
</developer>
</developers>

<!--
The following node defines the contributors that are working on the project,
their roles and contact information. This will be used when the site is
generated for the project. (mvn site).

* name - The display name that will be used for the display name under 'Project Team' of the website.
* email - The e-mail address of the team member which will be displayed.
* roles - A list of roles the member fulfills.
* organization - The organization of the developer.
* timezone - The timezone of the developer.
-->
<contributors>
<contributor>
<name>SomeName</name>
<email>SomeEmail</email>
<organization>
ABC company INC
</organization>
<organizationUrl>http://www.ABC_Company.com</organizationUrl>
<roles>
<role>Engineering Manager</role>
</roles>
<timezone>+5:45</timezone>
</contributor>
...
</contributors>
<!--
Each POM file is a configuration file for the build process. There are many plug-ins
for adding new steps in the build process and controlling which JDK is being used.
Below we customize the version of the JDK as well as some code inspection tools like:

1. Cobertura
-->
<build>
<plugins>
<!--
Configure the maven-compiler-plugin to use JDK 1.5
-->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<source>1.5</source>
<target>1.5</target>
<fork>true</fork>
</configuration>
</plugin>
<!--
Configure Cobertura to ignore monitoring the apache log4j
class.
-->
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>cobertura-maven-plugin</artifactId>
<version>2.0</version>
<configuration>
<instrumentation>
<ignores>
<ignore>org.apache.log4j.*</ignore>
</ignores>
</instrumentation>
</configuration>

<!--
The following controls under which goals should this
plug-in be executed.
-->
<executions>
<execution>
<goals>
<goal>clean</goal>
<goal>cobertura</goal>
</goals>
</execution>
</executions>
</plugin>

<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>findbugs-maven-plugin</artifactId>
<version>2.0.1</version>
<configuration>
<findbugsXmlOutput>true</findbugsXmlOutput>
<includeTests>false</includeTests>
<skip>true</skip>
</configuration>

</plugin>

<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<descriptors>
<descriptor>assembly.xml</descriptor>
</descriptors>
</configuration>
</plugin>

</plugins>
</build>

<!--
Maven can look at various repositories to locate dependencies that
need to be downloaded and placed into the local repository. In the
below configuration, we enable the Codehaus snapshot and Spring milestone
repositories, among others.
-->
<repositories>
<repository>
<id>snapshots-maven-codehaus</id>
<name>snapshots-maven-codehaus</name>
<snapshots>
<enabled>true</enabled>
<updatePolicy>always</updatePolicy>
<checksumPolicy>ignore</checksumPolicy>
</snapshots>
<releases>
<enabled>false</enabled>
</releases>
<url>http://snapshots.maven.codehaus.org/maven2</url>
</repository>
<repository>
<id>Maven Snapshots</id>
<url>http://snapshots.maven.codehaus.org/maven2/</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
<releases>
<enabled>false</enabled>
</releases>
</repository>
<repository>
<id>spring-s3</id>
<name>Spring Portfolio Maven MILESTONE Repository</name>
<url>
http://s3.amazonaws.com/maven.springframework.org/milestone
</url>
</repository>
...
</repositories>

<!--
For the reporting area of the website generated.

1. JavaDoc's
2. SureFire
3. Clover
4. Cobertura
5. JDepend
6. FindBugs
7. TagList

-->
<reporting>
<plugins>
<plugin>
<artifactId>maven-javadoc-plugin</artifactId>
<configuration>
<reportOutputDirectory>${site-deploy-location}</reportOutputDirectory>
<destDir>${project.name}</destDir>
<aggregate>true</aggregate>
</configuration>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>jxr-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>surefire-report-maven-plugin</artifactId>
</plugin>
<plugin>
<artifactId>maven-clover-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>cobertura-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>jdepend-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>findbugs-maven-plugin</artifactId>
<version>1.0.0</version>
<configuration>
<threshold>Normal</threshold>
<effort>Default</effort>
</configuration>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>taglist-maven-plugin</artifactId>
<configuration>
<tags>
<tag>TODO</tag>
<tag>FIXME</tag>
<tag>@todo</tag>
<tag>@deprecated</tag>
</tags>
</configuration>
</plugin>
</plugins>
</reporting>

</project>

Now that this is taken care of, let's quickly look at .project and .classpath. The .project file basically specifies the build commands; we will use Eclipse's javabuilder and m2eclipse's maven2Builder. Look below.

<?xml version="1.0" encoding="UTF-8"?>
<projectDescription>
<name>parent-pom</name>
<comment></comment>
<projects>
</projects>
<buildSpec>
<buildCommand>
<name>org.eclipse.jdt.core.javabuilder</name>
<arguments>
</arguments>
</buildCommand>
<buildCommand>
<name>org.maven.ide.eclipse.maven2Builder</name>
<arguments>
</arguments>
</buildCommand>
</buildSpec>
<natures>
<nature>org.eclipse.jdt.core.javanature</nature>
<nature>org.maven.ide.eclipse.maven2Nature</nature>
</natures>
</projectDescription>


.classpath is where you define where the compiled classes should reside, the Maven classpath container, and what your source folders are and the paths to them. Here's one from the WAR module.

<?xml version="1.0" encoding="UTF-8"?>
<classpath>
<classpathentry kind="src" output="target/classes" path="src/main/java"/>
<classpathentry kind="src" output="target/classes" path="src/main/resources"/>
<classpathentry kind="src" output="target/test-classes" path="src/test/java"/>
<classpathentry kind="src" output="target/test-classes" path="src/test/resources"/>
<classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/J2SE-1.5"/>
<classpathentry kind="con" path="org.eclipse.jst.j2ee.internal.web.container"/>
<classpathentry exported="true" kind="con" path="org.eclipse.jst.j2ee.internal.module.container"/>
<classpathentry kind="con" path="org.eclipse.jst.server.core.container/com.ibm.ws.ast.st.runtime.runtimeTarget.v61/was.base.v61"/>
<classpathentry kind="con" path="org.maven.ide.eclipse.MAVEN2_CLASSPATH_CONTAINER"/>
<classpathentry kind="output" path="target/classes"/>
</classpath>


At this stage you have set up the overall project structure and configured your libraries, build path and module dependencies. You are now ready to build the individual modules, starting with the JAR, then the WAR, and finally adding them to the EAR. You basically add the JAR to the WAR and only the WAR to the EAR; you don't want a cyclic dependency. I will show this in part II, but the sketch below gives a rough preview of how the module POMs will reference each other.
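
As a preview of part II, here is a minimal sketch of how the module POMs reference each other, using the same placeholder groupId, artifactIds and version as above (adjust them to your own). The WAR's pom.xml declares a dependency on the JAR module, and the EAR's pom.xml declares a dependency on the WAR module:

<!-- In AppNameWAR/pom.xml: pull in the JAR module (placeholder coordinates) -->
<dependency>
<groupId>com.yourCompanyName.ApplicationName</groupId>
<artifactId>AppNameJAR</artifactId>
<version>0.0.1-SNAPSHOT</version>
</dependency>

<!-- In AppNameEAR/pom.xml: pull in the WAR module so it gets packaged inside the EAR -->
<dependency>
<groupId>com.yourCompanyName.ApplicationName</groupId>
<artifactId>AppNameWAR</artifactId>
<version>0.0.1-SNAPSHOT</version>
<type>war</type>
</dependency>

With the modules wired up like this, running mvn clean install from the parent-pom folder should build the JAR, WAR and EAR in the correct order.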

Saturday, May 28, 2011

Correct UML diagram

If you pick up a UML book, as you read along you will get to a point where the author says something like, "Hey, it is up to you to decide what to draw in order to communicate your solution to the concerned parties." Often in practice, however, the concerned parties in our equation are the ones requesting a specific diagram. This may not sound like a problem, but on one such instance a fellow team member was asked to present a class diagram for a batch processing application when, in fact, a combination of a class diagram and either a sequence diagram or an activity diagram, with less emphasis on the class diagram, would have presented a clearer window into the solution. Of course, it might have been just a figure of speech: "I need a class diagram... meaning some sort of diagram... for me to understand what you've been up to."

"High" Level Estimation

Why do people ask for a high-level estimate when the feedback on it is then treated as a not-so-tentative, not-a-ballpark estimate?
(Rhetorical question)

My take on another layer of indirection

According to this wiki post, David Wheeler pretty much summed it up, at least as far as maybe 70% (just a wild guess) of the effort we expend making a living as programmers and solution architects goes, when he said, "All problems in computer science can be solved by another layer of indirection, except for the problem of too many layers of indirection."

Here's my take on this:

First of all "I Agree!". Absolutely! We thrive or at least try to on re-using what is already out there. Every new version of a module should at least try and get most out of what is already there, unless the previous version was a complete disaster and a re-write is absolutely necessary (converting vb.net code to java). Re-writing from scratch is so very tempting because you don't have to read what is writtern by others. Sometimes it just seems so logical to use the new framework. Only if time was not a factor. To re-write or not to re-write? Mostly, not to. Now back to re-using. Almost always, most of the code in the last version do not meet the means to solve the new problem. So what do we do (especially when the prevoius is from withn a third-party library)? Add another layer of inderction.. a proxy object that connects the old interface to the new interface. And if that does not solve everything add another layer. It does introduce complexity through explosion of objects.. but isn't that what interface based programming is all about?

How To: Export CSV in SQL Server 2008

In one of my earlier posts I explained how to send an attachment from SQL Server 2008. The attachment in my case would be a CSV exported by executing a stored procedure. SQL Server has a BULK INSERT command but no bulk export command, so I used the bcp utility, which has been around since the early days of SQL Server, from within SSMS. So in my case, the query looks like this:

sp_configure 'show advanced options', 1
RECONFIGURE
GO
sp_configure 'xp_cmdshell', 1
RECONFIGURE
GO

declare @sql varchar(8000)

select @sql = 'bcp "set fmtonly off exec MyDB..sp_getAdminAllUserStats ''2009-12-01'',''2009-12-31''" queryout c:\Reports\report.csv -c -t, -T'
exec master..xp_cmdshell @sql

In order to use the xp_cmdshell stored procedure, I first need to enable it; since it is an advanced option, 'show advanced options' has to be turned on first, which is what the opening lines of the script do. The syntax for bcp is explained in detail at http://msdn.microsoft.com/en-us/library/ms188365.aspx. In case you are wondering about the use of "set fmtonly off": without it you will get a "[Microsoft][ODBC SQL Server Driver]Function sequence error". "queryout" is used since I am exporting the result of a query. "-c" specifies character type and is faster than "-n" (native type). "-t," is what turns the export into a CSV, because each column's field terminator will be a comma. "-T" uses the current trusted (Windows) authentication. Finally, the last statement of the stored procedure needs to be a SELECT statement, as in the sketch below.
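
For completeness, here is a rough sketch of what such a stored procedure could look like. The procedure name matches the placeholder used in the bcp command above, but the table and column names are made up; the only real requirement is that the last statement is a SELECT.

-- A sketch only: created in the MyDB database referenced by the bcp command above.
CREATE PROCEDURE sp_getAdminAllUserStats
@startDate datetime,
@endDate datetime
AS
BEGIN
SET NOCOUNT ON
-- Any staging work can go here; bcp captures the result set of the final SELECT.
SELECT UserName, LoginCount, LastLogin --hypothetical columns
FROM UserStats --hypothetical table
WHERE LastLogin BETWEEN @startDate AND @endDate
END
GO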

 

That's it.

How to: Send email with attachment from SQL Server 2008 Enterprise Edition

Some time back, I had a problem with exporting a huge amount of data as CSV (to view in Excel) from a production server. It took several minutes when the server load was normal and far longer during peak traffic. My client asked me if I could take this feature off the live server and automate the process so that he would receive the exported data by mail instead of requesting it from me. This meant three things: creating a SQL job that executed once a week, which ran an export script using the bcp utility to produce a file, which was then sent as an attachment to the client. I will explain how to configure an SMTP mail server and send email from SQL Server 2008 Enterprise Edition as a two-part series.

This is the second part of the two-part series, and here I would like to show how to send email with an attachment from SQL Server 2008 Enterprise Edition. If you'd like to follow the first part, here is the link: http://dreamfusions.blogspot.com/2010/02/how-to-configure-smtp-mail-server-in.html.

  • Open SQL Server Management Studio (SSMS) and login either using your Windows Authentication or user credentials.

  • Once there, if you don't already see the "Object Explorer" hit F8 to open it.

  • Expand the "Management" folder and right click on "Database Mail".

  • Select "Configure Database Mail".

  • You will need to first create a new profile. To do this, select the first radio option that reads "Set up Database Mail by performing the following tasks."

  • Give it a Profile Name and a short description. The Profile Name is what you will reference when sending emails.

  • Then click on "Add" button to add SMTP server account you configured in part I of this series.

  • Fill out the necessary items. Leave the SMTP port at 25 and enter 127.0.0.1 as your server name. If you use Windows Authentication, select that; otherwise enter the login you used earlier.

  • Now you are done with the profile and mail server account. (If you'd rather script this step than use the wizard, see the sketch near the end of this post.)

  • You can now test by right clicking on Database Mail and clicking on Send Test email.

  • To verify, run the following (these views live in the msdb database):


SELECT * FROM msdb.dbo.sysmail_sentitems --to view sent items
SELECT * FROM msdb.dbo.sysmail_faileditems --to view failed items
SELECT * FROM msdb.dbo.sysmail_log --to view, among other things, the reason why your mail was not sent

  • Now, to manually send email (this is our goal), you first need to enable the Database Mail extended stored procedures. To do this, run the following script.


sp_CONFIGURE 'show advanced options', 1
GO
RECONFIGURE
sp_CONFIGURE 'Database Mail XPs', 1
GO
RECONFIGURE


  • You are now ready to send email manually! The sample script below sends an email with an attachment. sp_send_dbmail lives in the msdb database, and @profile_name must be the profile you created above.


USE msdb
GO
EXEC sp_send_dbmail
@profile_name='myMailProfile',
@recipients='tej.rana@hotmail.com',
@subject='Sending message from SQL Server 2008',
@body='You have received mail from SQL Server',
@file_attachments ='c:\Reports\report.csv'
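
As an aside, and as promised above, if you'd rather script the Database Mail profile and account than click through the wizard, something along these lines should work. It uses the same myMailProfile name as the send example; the account name and sender address are placeholders:

USE msdb
GO
--create the account that points at the local SMTP server from part I
EXEC sysmail_add_account_sp
@account_name = 'myMailAccount',
@email_address = 'sender@yourdomain.com',
@mailserver_name = '127.0.0.1',
@port = 25
--create the profile and attach the account to it
EXEC sysmail_add_profile_sp
@profile_name = 'myMailProfile',
@description = 'Profile for automated reports'
EXEC sysmail_add_profileaccount_sp
@profile_name = 'myMailProfile',
@account_name = 'myMailAccount',
@sequence_number = 1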


That's all there is to it.

How to: Configure SMTP mail server in Windows Server 2008 and IIS 6.0

Some time back, I had a problem with exporting a huge amount of data as CSV (to view in Excel) from a production server. It took several minutes when the server load was normal and far longer during peak traffic. My client asked me if I could take this feature off the live server and automate the process so that he would receive the exported data by mail instead of requesting it from me. This meant three things: creating a SQL job that executed once a week, which ran an export script using the bcp utility to produce a file, which was then sent as an attachment to the client. I will explain how to configure an SMTP mail server and send email as a two-part series.

This is the first part of the two-part series, where I would like to show how to configure an SMTP mail server in Windows Server 2008.

  • From the Start Menu, navigate to "Administrative Tools" and select "Server Manager".

  • From the "Features Summary" click on "Add Features".

  • Select "SMTP Server" and click on Install. Accept all changes.

  • Now from "Administrative Toos" , select "Internet Information Services (IIS) 6.0 Manager".

  • Right click on "SMTP Virtual Severs" and click on properties.

  • Navigate to "Access" tab and click on "Relay" button.

  • Leave the "Only the list below" radio button clicked and click on "Add" button.

  • Leave the "Single computer" option selecte and enter 127.0.0.1 as your IP address.

  • Now click Apply and you are almost done.

  • Right click on "SMTP Virtual Severs" and click on start.


That's it. You now have an SMTP server configured and running! Follow the next part in this two-part series to send mail via SQL Server 2008 Enterprise Edition.