Tuesday, May 31, 2011

SQL Profiler, Database Tuning Advisor and optimizing the db server

About a year and a half ago, I did some work on tuning my production database. The database was SQL Server 2005, but what I write below should work for SQL Server 2008 as well.

My notes on using SQL Profiler and the Database Tuning Advisor (err... tips, if you will):

  • Common columns to use are TextData, Duration, CPU, Reads, Writes, ApplicationName, StartTime and EndTime.

  • Do not trace directly to a table. If you want the results in a table, save the trace to a file and import it afterwards (see the import sketch just after this list).

  • Right-click on a column to apply a filter starting with that column.

  • Not all events within a group are important.

  • The EventClass and SPID data columns cannot be deselected.

  • Do not run Profiler on the machine where the database resides; run it from a different machine.

  • If your server is busy, do not check the 'Server processes trace data' option.

  • Turn Auto Scroll off to examine an earlier event without being scrolled to the bottom.

  • Bookmarking is useful for marking which events to look at later.

  • In order to minimize the load on the SQL server, reduce the number of events traced/captured.

  • The same goes with data columns.

  • Useful events for tracking slow-running stored procedures are RPC:Completed, SP:StmtCompleted, SQL:BatchStarting, SQL:BatchCompleted and Showplan XML.

  • Useful data columns are Duration, ObjectName, TextData, CPU, Reads, Writes, IntegerData, DatabaseName, ApplicationName, StartTime, EndTime, SPID, LoginName, EventSequence, BinaryData.

  • To find which queries run most frequently, save the trace to a table and query it. This should be determined from the production server:
    SELECT [ObjectName], COUNT(*) AS [SP Count]
    FROM [dbo].[Identify_query_counts]
    WHERE [Duration] > 100
    AND [ObjectName] IS NOT NULL
    GROUP BY [ObjectName]
    ORDER BY [SP Count] DESC

  • To test for deadlocks, use events like Deadlock Graph, Lock:Deadlock, Lock:Deadlock Chain, RPC:Completed, SP:StmtCompleted, SQL:BatchCompleted and SQL:BatchStarting.

  • Useful data columns are TextData, EventSequence, DatabaseName.

  • To test for blocking issues, use the Blocked Process Report event, and set the blocked process threshold (in seconds) with the following:
    SP_CONFIGURE 'show advanced options', 1 ;
    GO
    RECONFIGURE ;
    GO
    SP_CONFIGURE 'blocked process threshold', 10 ;
    GO
    RECONFIGURE ;
    GO
    -- do this to turn it off
    SP_CONFIGURE 'blocked process threshold', 0 ;
    GO
    RECONFIGURE ;
    GO

  • Useful data columns are EventClass, TextData, Duration, IndexID, Mode, DatabaseID and EndTime.

  • For a production environment, set the threshold value to 1800 (30 minutes) and be sure to turn it off afterwards.

  • To test for excessive index/table scans, use events like Scan:Started along with RPC:Completed, SP:StmtCompleted, SQL:BatchStarting, SQL:BatchCompleted and Showplan XML.

  • Useful data columns are ObjectID, ObjectName, Duration, EventClass, TextData, CPU, Reads, Writes, IntegerData, StartTime, EndTime, EventSequence and BinaryData.

  • DTA: Provide representative workload in order to receive optimal recommendations.

  • Only the RPC:Completed, SP:StmtCompleted and SQL:BatchCompleted events are needed in the workload trace.

  • Data Columns used are TextData, Duration, SPID, DatabaseName and LoginName.

  • Check the 'Server processes trace data' option to capture all trace events.

  • Run traces quarterly or monthly and feed them to the DTA to ensure indexes stay up to date.

  • Create baseline traces so you can compare them with traces taken after indexing, to check which queries run most often and their average duration.

  • Run only one trace at a time.

  • Do not run Profiler when running DB backup.

  • Set the trace stop time when you run a trace.

  • Run DTA at low traffic times.

  • Reduce the use of cursors in the application. Cursors that live in jobs or are rarely used can be ignored, as long as they finish on time and run during the hours when the application is least accessed.

  • Indexes created without knowledge of the queries serve little purpose.

  • Sort the trace (in the tuning template) by CPU and Reads; this surfaces the costliest queries (see the sketch just after this list).
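
As a concrete example of the import and sorting points above, here is a minimal sketch. It assumes the trace was saved to a file; the file path, table name and TOP count are placeholders I made up, and fn_trace_gettable is the built-in function for reading trace files.

-- Import a saved trace file into a table (run this away from production if possible).
-- 'C:\Traces\tuning_trace.trc' and dbo.TuningTrace are placeholder names.
SELECT *
INTO dbo.TuningTrace
FROM sys.fn_trace_gettable('C:\Traces\tuning_trace.trc', DEFAULT);

-- Sort the imported trace by CPU and Reads to surface the costliest statements.
SELECT TOP 20 ObjectName, TextData, Duration, CPU, Reads, Writes
FROM dbo.TuningTrace
WHERE EventClass IN (10, 12, 45)  -- RPC:Completed, SQL:BatchCompleted, SP:StmtCompleted
ORDER BY CPU DESC, Reads DESC;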


Basically, there are different parameters to look at. First, I optimized the most frequently used queries by creating indexes and, where possible, rewriting them after looking at the query execution plan. Then, since I knew these indexes would become fragmented as data in the tables in question gets updated or deleted, I set up a defragmentation plan as a job on the database server. Indexes with fragmentation between 0 and 20 percent were left untouched, those between 20 and 40 percent were reorganized, and those above 40 percent were rebuilt.
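
For a quick manual check before running the full job (shown at the end of this post), a query along these lines works; this is a minimal sketch against sys.dm_db_index_physical_stats for the current database, and the bucket labels are just illustrative:

-- Fragmentation per index in the current database.
-- Thresholds follow the text: under 20% leave alone, 20-40% reorganize, over 40% rebuild.
SELECT OBJECT_NAME(s.object_id) AS TableName,
       i.name AS IndexName,
       s.avg_fragmentation_in_percent,
       CASE WHEN s.avg_fragmentation_in_percent > 40 THEN 'REBUILD'
            WHEN s.avg_fragmentation_in_percent >= 20 THEN 'REORGANIZE'
            ELSE 'LEAVE ALONE'
       END AS SuggestedAction
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE i.name IS NOT NULL
ORDER BY s.avg_fragmentation_in_percent DESC;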

Secondly, I examined queries and stored procedures that were hogging the CPU, i.e. not responding and causing other queries to wait on them. I found one that was not written very well, so I rewrote it.
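
One way to spot such statements while they are still running is a query against sys.dm_exec_requests (available in SQL Server 2005 and later); this is only a sketch, and how you filter it is up to you:

-- Currently executing requests, heaviest CPU consumers first.
SELECT r.session_id, r.cpu_time, r.total_elapsed_time, r.blocking_session_id,
       t.text AS statement_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID   -- exclude this query itself
ORDER BY r.cpu_time DESC;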

After that, I checked other server parameters to see if the server actually met the standard, for example 'Memory: Pages/sec' and 'Memory: Available Bytes'. We had a 32-bit processor, so we couldn't just upgrade the RAM; we had to upgrade to a 64-bit server with 8 GB of RAM initially, enabling 3 GB of process space. The reason for upgrading to only 8 GB was, of course, that we wanted to see gradual performance improvement.

Then I adjusted the connection pooling parameters of my application server (JBoss 4.0.3 SP1) through a lot of load testing; I should have an article on that later. It was set up with Apache forwarding all the non-static requests to JBoss (images and HTML were served by Apache). I won't dwell on this too much for now.

Lastly, all the developers on the team focused their attention on checking the source code to see whether connections were being opened and closed properly. The application used JDBC, and this was quite a tedious task. We even managed to write code that flushed a connection after it reached a certain inactivity threshold and logged whenever it did so. I know most DBAs would ask for this step to be done first, but either way our queries and database server needed optimization and an upgrade.

There was an emailing app which sent newsletters to over 150k users at the time. It used to take 5 to 6 hours, depending on the traffic on the application. That drastically dropped to less than an hour! :)

References:

Query to get the top 20 most expensive queries in the database (by total logical reads)

SELECT TOP 20
    SUBSTRING(qt.text, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(qt.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset)/2)+1),
    qs.execution_count,
    qs.total_logical_reads, qs.last_logical_reads,
    qs.min_logical_reads, qs.max_logical_reads,
    qs.total_elapsed_time, qs.last_elapsed_time,
    qs.min_elapsed_time, qs.max_elapsed_time,
    qs.last_execution_time, qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE qt.encrypted = 0
ORDER BY qs.total_logical_reads DESC


Query to identify wait times

SELECT TOP 10 *
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC
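
If you want wait times for a specific window rather than cumulative figures since the last restart, the counters can be cleared first. Note that this resets the server-wide statistics, so do it deliberately:

-- Clear the wait statistics, run the workload of interest, then re-run the query above.
DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR);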


The stored procedure that implements the defragmentation logic (called from the job)

USE [xxxx]--Your Db name
GO

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

ALTER PROCEDURE [dbo].[sp_IndexDefrag]
AS
DECLARE @DBName NVARCHAR(255),
        @TableName NVARCHAR(255),
        @SchemaName NVARCHAR(255),
        @IndexName NVARCHAR(255),
        @PctFrag DECIMAL,
        @Defrag NVARCHAR(MAX)

IF OBJECT_ID('tempdb..#Frag') IS NOT NULL
    DROP TABLE #Frag

CREATE TABLE #Frag
        (DBName NVARCHAR(255),
         TableName NVARCHAR(255),
         SchemaName NVARCHAR(255),
         IndexName NVARCHAR(255),
         AvgFragment DECIMAL)

EXEC sp_msforeachdb 'INSERT INTO #Frag (DBName, TableName, SchemaName, IndexName, AvgFragment)
                     SELECT ''?'' AS DBName,
                            t.name AS TableName,
                            sc.name AS SchemaName,
                            i.name AS IndexName,
                            s.avg_fragmentation_in_percent
                     FROM ?.sys.dm_db_index_physical_stats(DB_ID(''?''), NULL, NULL, NULL, ''Sampled'') AS s
                     JOIN ?.sys.indexes i
                       ON s.object_id = i.object_id
                      AND s.index_id = i.index_id
                     JOIN ?.sys.tables t
                       ON i.object_id = t.object_id
                     JOIN ?.sys.schemas sc
                       ON t.schema_id = sc.schema_id
                     WHERE s.avg_fragmentation_in_percent > 20
                       AND t.type = ''U''
                       AND s.page_count > 8
                     ORDER BY TableName, IndexName'

DECLARE cList CURSOR FOR
    SELECT * FROM #Frag
    WHERE DBName = 'XXXX' --your db

OPEN cList
FETCH NEXT FROM cList
    INTO @DBName, @TableName, @SchemaName, @IndexName, @PctFrag

WHILE @@FETCH_STATUS = 0
BEGIN
    IF @PctFrag BETWEEN 20.0 AND 40.0
    BEGIN
        SET @Defrag = N'ALTER INDEX ' + @IndexName + ' ON '
                    + @DBName + '.' + @SchemaName + '.' + @TableName + ' REORGANIZE'
        EXEC sp_executesql @Defrag
        PRINT 'Reorganize index: ' + @DBName + '.' + @SchemaName + '.' + @TableName + '.' + @IndexName
    END
    ELSE IF @PctFrag > 40.0
    BEGIN
        SET @Defrag = N'ALTER INDEX ' + @IndexName + ' ON '
                    + @DBName + '.' + @SchemaName + '.' + @TableName + ' REBUILD'
        EXEC sp_executesql @Defrag
        PRINT 'Rebuild index: ' + @DBName + '.' + @SchemaName + '.' + @TableName + '.' + @IndexName
    END

    FETCH NEXT FROM cList
        INTO @DBName, @TableName, @SchemaName, @IndexName, @PctFrag
END
CLOSE cList
DEALLOCATE cList

DROP TABLE #Frag


So that's it. ;) I know this is very long for a post but, trust me, this work takes days if not weeks. And optimization is an ongoing process: you cannot sit back and relax once you have done it the first time.

 

Setting up Maven Enterprise Application Part II

Part II: Setting up individual modules of an EAR






If you've been following along, this is part 2 of a two-part series on setting up a Maven Enterprise Application. Here is part I, which shows you how to use Maven to set up the overall project structure, configure the build path, and add module dependencies and libraries. In this second part, I will show you how to structure the individual modules, the JAR and the WAR. Since I've assumed that you're using Eclipse/RAD or any other IDE for that matter, I've already discussed how to add the WAR to the EAR in part I; from the IDE this is basically adding a WAR module dependency to the EAR.

Overall View
Before I begin, please see the image below from RAD for the overall structure. For some of you, this should be enough. You can get this view from the Enterprise Explorer in RAD or the Project Explorer in Eclipse. Some of you may prefer the Navigator view, but I am more comfortable with the former two.

[Image: Module Structure]

JAR Module

The JAR module will, as I mentioned in part I, house your model and the unit tests for it. Your model will consist of domains; DAOs, which are your database-technology-dependent interfaces; and services, which are your business interfaces and are not tied to the underlying database. Additionally, you may have custom exception or helper classes.

Your source folder will be under src/main/java. Any resources, such as an applicationContext file, go in src/main/resources. In part I, I mentioned that these locations are added to the classpath, so to read a resource from there you'd write something like classpath*:applicationContext.xml.
Your tests (unit tests for the JAR module and integration tests for the WAR module) will be housed under src/test/java, and any test resource such as an applicationContext file will be housed under src/test/resources.

Basically, if you use an ORM tool, your domains or entities define relationships and constraints. These domains are packaged under com.yourCompanyName.applicationName.domain. You don't write tests for domain objects.

Your DAO interfaces are packaged under com.yourCompanyName.applicationName.dao and the implementations under com.yourCompanyName.applicationName.dao.impl. If you're using Spring for dependency injection, you'd annotate the implementations with @Repository or define a bean in the applicationContext.xml file. You would then be able to autowire your DAOs into your services and service unit tests. DAOs will have to be unit tested, so you'd create the same package, com.yourCompanyName.applicationName.dao.impl, under src/test/java for those tests. I am not going into the details of writing unit tests, DAOs or any other code in this two-part series; I will have articles on them later.

Your services are structured the same way as your DAOs, under com.yourCompanyName.applicationName.service and com.yourCompanyName.applicationName.service.impl. You'd want to use DI to initialize them via annotations or XML, the same way you did for the DAOs, because these services will be called from the WAR module. Your service tests go under com.yourCompanyName.applicationName.service.impl under src/test/java. For unit testing you'd want a mocking tool like EasyMock or Mockito; you mock the DAOs in the service tests because you do not want to tie your services to a specific DAO implementation. For example, tomorrow you might change from Hibernate to iBatis or plain JDBC.

So this is basically it as far as your JAR module goes.

WAR Module

The WAR module will follow the same naming conventions as the JAR module. Your sources will be housed under src/main/java and the resources (not web.xml) under src/main/resources. You can have a folder called WebContent (it exists by default in RAD's dynamic web project), under which you will have your WEB-INF folder for the web descriptor file (web.xml) and a static folder for your pages, scripts, CSS and images. Your tests will be under src/test/java as usual.

Suppose you're using Struts 2. You would then have the action classes (the controllers in Struts 2) under the com.yourCompanyName.applicationName.web.action package, sticking to the naming convention. Your actions will call your injected service interfaces from the JAR module. Your applicationContext file will almost always be the same as the one you defined in the JAR module.

Now you'd want integration tests in the WAR module. The package to create is the same as for your action classes (or controllers), but under src/test/java. For integration testing, you'd use a tool to mock out the HttpServletRequest and the session. You can also mock out the database as you did in the service unit tests in the JAR module; that way you are not dependent on a DAO implementation.

The only thing remaining is to add JAR module as dependency to the WAR module. This I explained in part I. But it is pretty straightforward from the IDE.

You are now all set to start working on your EAR project.

Final note: I have not covered unit testing or integration testing in enough detail for you to go ahead and start writing them. The purpose was only to show you how to set up an EAR using Maven. I will have articles related to them later. The good part is that I won't have to explain the folder structures then; I can always refer to these two articles :)

 

Monday, May 30, 2011

Setting Up Maven Enterprise Application Part I

Part I: Setting up maven and overall structure






This will be the first of a two-part series in which I would like to show how to quickly set up a Java Enterprise Application. In this part you will set up the overall project structure and configure your libraries, build path and module dependencies. I use RAD/Eclipse with the m2eclipse plugin (which you can find here) for development, but the configuration is independent of what IDE or text editor you choose to use. So let's get started.

Pre-requisites:

  • Maven 2. I used maven version 2.0.9. For a higher version of maven, please refer to maven documentation.

  • Eclipse/RAD. Although you can do this without an IDE, it will nonetheless be easier to configure in an IDE since this is an Enterprise Application.

  • JDK 1.4 or higher. I prefer JDK 5 or higher.


Overall structure
The enterprise application that I am going to set up will have a JAR module for services and DAOs, a WAR module that will import the JAR module, and an EAR module that will include the WAR module. If you wish to add more modules, you can follow a similar structure. You could have only WAR and JAR modules in Eclipse, but it is preferable to house them within an EAR; RAD also requires you to have a deployable EAR, and our WAR module will be deployed inside it. To make this a Maven project, we will also have a parent POM (which is not a module) sitting above all three modules to package them together.

Look at the image below to see how the structure will look:
[Image: Overall Structure]

Setting individual modules starting with parent-pom
When you look at the image above, you will see three files: pom.xml, .project and .classpath. All three are important to configure. The IDE will generate your .classpath and .project files, but I will nevertheless go through all of them to ensure that you understand what the IDE is generating. You can also see the .settings folder; it is specific to parent-pom and contains additional project/IDE-related configuration, but I will not go through it because I don't want to make this too long. The configuration for the individual modules will be similar to what you will do for parent-pom; if you understand how I am going to configure these three files, you will easily be able to configure the rest.
So let's begin with pom.xml. POM files are read by Maven in order to do the following:

  • Package modules. In this parent POM, you will list all three modules. The value in each module node is the module's folder name, not its artifactId. In the EAR's POM you would add the WAR module, and in the WAR's the JAR module. The JAR module will not have any module dependency.

  • Define versioning, application description, URL, name and so on. This is self-explanatory; however, you'd want the version to be consistent across all modules.

  • Dependencies. You will define all the artifacts, and their versions, that you use in your application. In this parent POM, you'd define all artifacts that are common to more than one module. For example, you'd define log4j here since you need logging in both the JAR and the WAR module; the same can be said of JUnit and the Spring core libraries.

  • Developers and Contributors. Optional. List out the developers/contributors and their roles.

  • Plugin lists. This lets you define the goals for your build process. For example, you'd use Cobertura to see what percentage of your source code is unit tested.

  • Reporting list. Optional. If you need to generate reports, you'd use this. Again, as mentioned, if you want a Cobertura report, here's where you define cobertura-maven-plugin.

  • Repositories. A repository is where you pull all the artifacts from. Some private repos need authentication, but most don't. While Maven's default settings.xml is a good place to define your repositories, this is another place to do so.


Please look at the code below for more info:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<!--
This is the 'parent' POM (Project Object Model) which will have the following
nodes inherited by any POM which declared the <parent/> node to point to this
POM. Please note: This is not the 'super POM' which is supplied by Maven itself.
The super POM has its values inherited by all POMs.

* dependencies
* developers and contributors
* plugin lists
* reports lists
* plugin executions with matching ids
* plugin configuration

@author yourName
@version 1.0
-->

<!--
The POM version.
-->
<modelVersion>4.0.0</modelVersion>

<!--
The organization that is creating the artifact. The standard naming convention
is usually the organizations domain name backwards like the package name in Java.
-->
<groupId>com.yourCompanyName.ApplicationName</groupId>

<!--
The artifact name. This will be used when generating the physical artifact name.
The result will be artifactId-version.type.
-->
<artifactId>parent-pom</artifactId>

<!--
The type of artifact that will be generated. In this case no real artifact is
generated by this POM, only the sub projects.
-->
<packaging>pom</packaging>

<!--
The version of the artifact to be generated.
-->
<version>0.0.1-SNAPSHOT</version>

<!--
The name of the project to be displayed on the website.
-->
<name>Your Application Name</name>

<!--
The description of the project to be displayed on the website.
-->
<description>
Description of your app
</description>

<!--
The url of the project to be displayed on the website.
-->
<url>http://www.WARModuleURL.com</url>
<!--
This project is an aggregation/multi-module which includes the following
projects. Please note: the value between the module node is the folder
name of the module and not the artifactId value.
-->
<modules>
<module>AppNameJAR</module>
<module>AppNameWAR</module>
<module>AppNameEAR</module>
</modules>

<!--
This segment lists the inherited dependencies for each child POM.
-->
<dependencies>
<dependency>
<groupId>org.mockito</groupId>
<artifactId>mockito-core</artifactId>
<version>1.8.5</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.4</version>
<scope>test</scope>
</dependency>
......
</dependencies>

<!--
The following node defines the developers that are working on the project,
their roles and contact information. This will be used when the site is
generated for the project. (mvn site).

* id - The id of the developer.
* name - The display name that will be used for the display name under 'Project Team' of the website.
* email - The e-mail address of the team member which will be displayed.
* roles - A list of roles the member fulfills.
* organization - The organization of the developer.
* timezone - The timezone of the developer.
-->
<developers>
<developer>
<id>12344</id>
<name>Your Name</name>
<email>yourEamil</email>
<organization>
ABC company INC
</organization>
<organizationUrl>http://www.ABC_Comapnycom</organizationUrl>
<roles>
<role>Technical Leader</role>
</roles>
<timezone>+5:45</timezone>
</developer>
</developers>

<!--
The following node defines the contributors that are working on the project,
their roles and contact information. This will be used when the site is
generated for the project. (mvn site).

* name - The display name that will be used for the display name under 'Project Team' of the website.
* email - The e-mail address of the team member which will be displayed.
* roles - A list of roles the member fulfills.
* organization - The organization of the developer.
* timezone - The timezone of the developer.
-->
<contributors>
<contributor>
<name>SomeName</name>
<email>SomeEmail</email>
<organization>
ABC company INC
</organization>
<organizationUrl>http://www.ABC_Comapnycom</organizationUrl>
<roles>
<role>Engineering Manager</role>
</roles>
<timezone>+5:45</timezone>
</contributor>
...
</contributors>
<!--
Each POM file is a configuration file for the build process. There are many plug-ins
for adding new steps in the build process and controlling which JDK is being used.
Below we customize the version of the JDK as well as some code inspection tools like:

1. Cobertura
-->
<build>
<plugins>
<!--
Configure the maven-compiler-plugin to use JDK 1.5
-->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<source>1.5</source>
<target>1.5</target>
<fork>true</fork>
</configuration>
</plugin>
<!--
Configure Cobertura to ignore monitoring the apache log4j
class.
-->
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>cobertura-maven-plugin</artifactId>
<version>2.0</version>
<configuration>
<instrumentation>
<ignores>
<ignore>org.apache.log4j.*</ignore>
</ignores>
</instrumentation>
</configuration>

<!--
The following controls under which goals should this
plug-in be executed.
-->
<executions>
<execution>
<goals>
<goal>clean</goal>
<goal>cobertura</goal>
</goals>
</execution>
</executions>
</plugin>

<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>findbugs-maven-plugin</artifactId>
<version>2.0.1</version>
<configuration>
<findbugsXmlOutput>true</findbugsXmlOutput>
<includeTests>false</includeTests>
<skip>true</skip>
</configuration>

</plugin>

<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<descriptors>
<descriptor>assembly.xml</descriptor>
</descriptors>
</configuration>
</plugin>

</plugins>
</build>

<!--
Maven can look at various repositories to locate dependencies that
need to be downloaded and placed into the local repository. In the
below configuration, we enable codehaus, apache, and opensymphony
repositories.
-->
<repositories>
<repository>
<id>snapshots-maven-codehaus</id>
<name>snapshots-maven-codehaus</name>
<snapshots>
<enabled>true</enabled>
<updatePolicy>always</updatePolicy>
<checksumPolicy>ignore</checksumPolicy>
</snapshots>
<releases>
<enabled>false</enabled>
</releases>
<url>http://snapshots.maven.codehaus.org/maven2</url>
</repository>
<repository>
<id>Maven Snapshots</id>
<url>http://snapshots.maven.codehaus.org/maven2/</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
<releases>
<enabled>false</enabled>
</releases>
</repository>
<repository>
<id>spring-s3</id>
<name>Spring Portfolio Maven MILESTONE Repository</name>
<url>
http://s3.amazonaws.com/maven.springframework.org/milestone
</url>
</repository>
...
</repositories>

<!--
For the reporting area of the website generated.

1. JavaDoc's
2. SureFire
3. Clover
4. Cobertura
5. JDepend
6. FindBugs
7. TagList

-->
<reporting>
<plugins>
<plugin>
<artifactId>maven-javadoc-plugin</artifactId>
<configuration>
<reportOutputDirectory>${site-deploy-location}</reportOutputDirectory>
<destDir>${project.name}</destDir>
<aggregate>true</aggregate>
</configuration>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>jxr-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>surefire-report-maven-plugin</artifactId>
</plugin>
<plugin>
<artifactId>maven-clover-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>cobertura-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>jdepend-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>findbugs-maven-plugin</artifactId>
<version>1.0.0</version>
<configuration>
<threshold>Normal</threshold>
<effort>Default</effort>
</configuration>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>taglist-maven-plugin</artifactId>
<configuration>
<tags>
<tag>TODO</tag>
<tag>FIXME</tag>
<tag>@todo</tag>
<tag>@deprecated</tag>
</tags>
</configuration>
</plugin>
</plugins>
</reporting>

</project>

Now that this is taken care of, let's look at .project and .classpath quickly. The .project file basically specifies build commands; we will use Eclipse's javabuilder and maven2Builder. Look below.

<?xml version="1.0" encoding="UTF-8"?>
<projectDescription>
<name>parent-pom</name>
<comment></comment>
<projects>
</projects>
<buildSpec>
<buildCommand>
<name>org.eclipse.jdt.core.javabuilder</name>
<arguments>
</arguments>
</buildCommand>
<buildCommand>
<name>org.maven.ide.eclipse.maven2Builder</name>
<arguments>
</arguments>
</buildCommand>
</buildSpec>
<natures>
<nature>org.eclipse.jdt.core.javanature</nature>
<nature>org.maven.ide.eclipse.maven2Nature</nature>
</natures>
</projectDescription>


.classpath is where you define where the built packages will reside, your Maven repository location, what your source folders are, and the paths to them. Here's one from the WAR module.

<?xml version="1.0" encoding="UTF-8"?>
<classpath>
<classpathentry kind="src" output="target/classes" path="src/main/java"/>
<classpathentry kind="src" output="target/classes" path="src/main/resources"/>
<classpathentry kind="src" output="target/test-classes" path="src/test/java"/>
<classpathentry kind="src" output="target/test-classes" path="src/test/resources"/>
<classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/J2SE-1.5"/>
<classpathentry kind="con" path="org.eclipse.jst.j2ee.internal.web.container"/>
<classpathentry exported="true" kind="con" path="org.eclipse.jst.j2ee.internal.module.container"/>
<classpathentry kind="con" path="org.eclipse.jst.server.core.container/com.ibm.ws.ast.st.runtime.runtimeTarget.v61/was.base.v61"/>
<classpathentry kind="con" path="org.maven.ide.eclipse.MAVEN2_CLASSPATH_CONTAINER"/>
<classpathentry kind="output" path="target/classes"/>
</classpath>


At this stage you have set up the overall project structure and configured your libraries, build path and module dependencies. You are now ready to build the individual modules, starting with the JAR, then the WAR, and finally adding them to the EAR. You basically add the JAR to the WAR and only the WAR to the EAR; you don't want a cyclic dependency. I will show this in part II.

Saturday, May 28, 2011

Correct UML diagram

If you pick up a UML book, as you go on reading you will get to a point where the author says something like: "Hey, it is up to you to decide what to draw in order to communicate your solution to the concerned parties." Often in practice, however, the concerned parties in our equation are the ones requesting a specific diagram. This may not sound like a problem, but on one such instance a fellow team member was asked to present a class diagram for a batch processing application when, in fact, a combination of a class diagram and either a sequence diagram or an activity diagram, with less emphasis on the class diagram, would have presented a clearer window for visualizing the solution. Of course, it might have been just a figure of speech: "I need a class diagram... meaning some sort of diagram... for me to understand what you've been up to."

"High" Level Estimation

Why do people ask for a high-level estimate when the feedback for it actually becomes the not-so-tentative, not-a-ball-park estimate?
(Rhetorical question)

My take on another layer of indirection

According to this wiki post, David Wheeler pretty much summed it up, err... at least as far as some 70% (just a wild guess) of our effort goes as we make a living as programmers and solution architects, when he said: "All problems in computer science can be solved by another layer of indirection, except for the problem of too many layers of indirection."

Here's my take on this:

First of all "I Agree!". Absolutely! We thrive or at least try to on re-using what is already out there. Every new version of a module should at least try and get most out of what is already there, unless the previous version was a complete disaster and a re-write is absolutely necessary (converting vb.net code to java). Re-writing from scratch is so very tempting because you don't have to read what is writtern by others. Sometimes it just seems so logical to use the new framework. Only if time was not a factor. To re-write or not to re-write? Mostly, not to. Now back to re-using. Almost always, most of the code in the last version do not meet the means to solve the new problem. So what do we do (especially when the prevoius is from withn a third-party library)? Add another layer of inderction.. a proxy object that connects the old interface to the new interface. And if that does not solve everything add another layer. It does introduce complexity through explosion of objects.. but isn't that what interface based programming is all about?

How To: Export CSV in SQL Server 2008

In one of my earlier posts I explained how to send an attachment from SQL Server 2008. The attachment in my case is a CSV exported by executing a stored procedure. I used the bcp utility from within SSMS: SQL Server has a "bulk insert" command but no bulk export command, so I used bcp, which has been around since the early days of SQL Server. So in my case, the query looks like this:

sp_configure 'xp_cmdshell', '1'
RECONFIGURE
GO

declare @sql varchar(8000)

select @sql = 'bcp "set fmtonly off exec MyDB..sp_getAdminAllUserStats ''2009-12-01'',''2009-12-31''" queryout c:\Reports\report.csv -c -t, -T'
exec master..xp_cmdshell @sql

In order to use the xp_cmdshell stored procedure, I first need to enable it. The syntax of bcp is explained in detail at http://msdn.microsoft.com/en-us/library/ms188365.aspx. In case you are wondering about the use of "set fmtonly off": without it you will get a "[Microsoft][ODBC SQL Server Driver]Function sequence error". "queryout" is used since I am exporting the result of a query. "-c" specifies character type and is faster than -n (native type). "-t," is what turns the export into a CSV, because each column's field terminator will be a comma. "-T" uses the current trusted (Windows) authentication. Finally, the last statement of the stored procedure needs to be a SELECT statement.
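
Since xp_cmdshell is a common attack surface, you may want to turn it back off once the export has run; this is just the reverse of the first statement above:

-- Disable xp_cmdshell again after the export completes.
sp_configure 'xp_cmdshell', '0'
RECONFIGURE
GO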

 

That's it.

How to: Send email with attachment from SQL Server 2008 Enterprise Edition

Some time back, I had a problem with exporting a huge amount of data as CSV (to be viewed in Excel) from a production server. It took several minutes when the server load was normal, and far worse at peak traffic. My client asked me if I could take this feature off the live server and automate the process so that he would receive the exported data by mail instead of requesting it from me. This meant three things: creating a SQL job that executed once a week, having it run an export script using the bcp utility to write a file, and sending that file as an attachment to the client. I will explain how to configure the SMTP mail server and send email from SQL Server 2008 Enterprise Edition as a two-part series.

This is the second part of the two-part series, and here I would like to show how to send email with an attachment from SQL Server 2008 Enterprise Edition. If you'd like to follow the first part, here is the link: http://dreamfusions.blogspot.com/2010/02/how-to-configure-smtp-mail-server-in.html.

  • Open SQL Server Management Studio (SSMS) and log in using either Windows Authentication or your user credentials.

  • Once there, if you don't already see the "Object Explorer" hit F8 to open it.

  • Expand the "Management" folder and right click on "Database Mail".

  • Select "Configure Database Mail".

  • You will need to first create a new profile. To do this, select the first radio option that reads "Set up Database Mail by performing the following tasks."

  • Give a Profile Name and a short description. The Profile Name is important to send emails.

  • Then click on "Add" button to add SMTP server account you configured in part I of this series.

  • Fill out the necessary items. Leave the SMTP port at 25. Enter 127.0.0.1 as your server name. Also, if you have Windows Authentication, select that, or enter the login you used earlier.

  • Now you are done with profile and mail server account.

  • You can now test by right clicking on Database Mail and clicking on Send Test email.

  • To verify, use:
    SELECT * FROM msdb.dbo.sysmail_sentitems --to view sent items
    SELECT * FROM msdb.dbo.sysmail_faileditems --to view failed items
    SELECT * FROM msdb.dbo.sysmail_log --to view, among other things, the reason why your mail was not sent

  • Now, to manually send email (this is our goal), you first need to reconfigure Database Mail to enable it. To do this, run the following script.


sp_CONFIGURE 'show advanced options', 1
GO
RECONFIGURE
GO
sp_CONFIGURE 'Database Mail XPs', 1
GO
RECONFIGURE


  • You are now ready to send email manually!! The sample script below sends an email with an attachment. Use the msdb database, and enter the profile you created.


USE msdb
GO
EXEC sp_send_dbmail
@profile_name='myMailProfile',
@recipients='tej.rana@hotmail.com',
@subject='Sending message from SQL Server 2008',
@body='You have received mail from SQL Server',
@file_attachments ='c:\Reports\report.csv'
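
To automate the weekly run mentioned at the start of this post, the export and the mail call can be wrapped in a SQL Server Agent job. The sketch below uses the standard msdb procedures (sp_add_job, sp_add_jobstep, sp_add_jobschedule, sp_add_jobserver); the job name and schedule are placeholders, and sp_WeeklyReportAndMail is a hypothetical wrapper procedure that would run the bcp export from my earlier post and then the sp_send_dbmail call above.

USE msdb
GO
-- Create the job shell.
EXEC sp_add_job @job_name = 'WeeklyUserStatsReport' ;
-- Single T-SQL step. sp_WeeklyReportAndMail is a hypothetical wrapper that does
-- the bcp export and then calls sp_send_dbmail with the generated file attached.
EXEC sp_add_jobstep
     @job_name = 'WeeklyUserStatsReport',
     @step_name = 'Export and mail report',
     @subsystem = 'TSQL',
     @command = 'EXEC MyDB.dbo.sp_WeeklyReportAndMail;' ;
-- Run every Monday at 06:00.
EXEC sp_add_jobschedule
     @job_name = 'WeeklyUserStatsReport',
     @name = 'Weekly - Monday 6am',
     @freq_type = 8,               -- weekly
     @freq_interval = 2,           -- Monday
     @freq_recurrence_factor = 1,  -- every week
     @active_start_time = 60000 ;  -- 06:00:00
-- Attach the job to the local server so SQL Server Agent will run it.
EXEC sp_add_jobserver @job_name = 'WeeklyUserStatsReport', @server_name = '(local)' ;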


That's all there is to it.

How to: Configure SMTP mail server in Windows Server 2008 and IIS 6.0

Some time back, I had a problem with exporting a huge amount of data as CSV (to be viewed in Excel) from a production server. It took several minutes when the server load was normal, and far worse at peak traffic. My client asked me if I could take this feature off the live server and automate the process so that he would receive the exported data by mail instead of requesting it from me. This meant three things: creating a SQL job that executed once a week, having it run an export script using the bcp utility to write a file, and sending that file as an attachment to the client. I will explain how to configure the SMTP mail server and send the email as a two-part series.

This is the first part of the two-part series, where I would like to show how to configure an SMTP mail server on Windows Server 2008.

  • From the Start Menu, navigate to "Administrative Tools" and select "Server Manager".

  • From the "Features Summary" click on "Add Features".

  • Select "SMTP Server" and click on Install. Accept all changes.

  • Now from "Administrative Toos" , select "Internet Information Services (IIS) 6.0 Manager".

  • Right click on "SMTP Virtual Severs" and click on properties.

  • Navigate to "Access" tab and click on "Relay" button.

  • Leave the "Only the list below" radio button clicked and click on "Add" button.

  • Leave the "Single computer" option selecte and enter 127.0.0.1 as your IP address.

  • Now click apply and you are almost done.

  • Right click on "SMTP Virtual Severs" and click on start.


That's it. Now you have an SMTP server configured and running!! Follow the next part in this two-part series to send mail via SQL Server 2008 Enterprise Edition.