Glassfish 3 connection pool exhausted before maximum pool size reached



I have posted this at StackOverflow and the Liferay forum with no responses thus far:

Here is the stack I am working with: Liferay 6.0.6, Glassfish 3.0.1, JTDS driver (I think version 1.2.5), Microsoft SQL Server 2012

 

In Glassfish, I have the JDBC connection pool set to an initial (steady) size of 64 and a maximum of 2048 (originally the maximum was 256).

 

Over time (3-4 days) the website becomes unresponsive, and when I use VisualVM to look at the thread count I can see a spike from the normal ~180 threads up to around 1,300. In the server log I see an SQL error:

 

java.sql.SQLException: Error in allocating a connection. Cause: In-use connections equal max-pool-size and expired max-wait-time. Cannot allocate more connections.

 

Yesterday there were approximately 450 threads executing, with the error above in the server log, yet the pool is configured for 2048 connections. Am I misunderstanding how the pooling works? When I asked our DB admin how many connections were currently showing on the MSSQL side, he said there were about 450, so the number of DB connections and the number of threads I see in VisualVM seem to reconcile. My expectation, though, was that I wouldn't see a "Cannot allocate more connections" error until the threads/connections reached 2048, not 450 (in fact I think the SQL errors start at a much lower number).

 

So either I am misunderstanding what "Maximum Pool Size" means, or there is a relationship between threads and connections that I don't understand (I assume it is 1 to 1), or there is some other limitation somewhere in the stack that is causing the error.

 

As I understand it, Liferay uses Hibernate, but from what I could glean from the Hibernate documentation it relies on the JDBC driver/data source to handle the connections and pooling. My JNDI connection in domain.xml looks like this:

 

    <jdbc-resource pool-name="Central_data" description="Central Connection" jndi-name="jdbc/Central_data"></jdbc-resource>

    <jdbc-connection-pool datasource-classname="net.sourceforge.jtds.jdbcx.JtdsDataSource" res-type="javax.sql.ConnectionPoolDataSource" name="Central_data">
        <property name="DatabaseName" value="Central"></property>
        <property name="Description" value="Database RS"></property>
        <property name="Password" value="XXXXXXX"></property>
        <property name="PrepareSql" value="3"></property>
        <property name="SendStringParametersAsUnicode" value="true"></property>
        <property name="ServerName" value="XXX.XXX.XXX.XXX"></property>
        <property name="PortNumber" value="1433"></property>
        <property name="Ssl" value="request"></property>
        <property name="UseCursors" value="true"></property>
        <property name="User" value="XXXXXXXXX"></property>
    </jdbc-connection-pool>
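
For reference, the 64 / 2048 sizing I set in the admin console should, as far as I understand the GlassFish 3 domain.xml format, end up as attributes on that same jdbc-connection-pool element, roughly like the sketch below. Treat the attribute names as my reading of the docs rather than what my file actually contains:

    <!-- sketch: pool sizing expressed as attributes on the pool element -->
    <jdbc-connection-pool name="Central_data"
                          datasource-classname="net.sourceforge.jtds.jdbcx.JtdsDataSource"
                          res-type="javax.sql.ConnectionPoolDataSource"
                          steady-pool-size="64"
                          max-pool-size="2048"
                          max-wait-time-in-millis="60000"
                          idle-timeout-in-seconds="300">
        <!-- same property elements as above -->
    </jdbc-connection-pool>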

 

This has been going on for a few months, and although I think 2048 is too high a number for the pool size, every indication I see in server.log suggests the connection pool is exhausted at a much lower number. I have tried many different "Maximum" values, "Idle timeout" values (0 to 300 [default]) and "Max wait time" values (0 to 60000 [default]) and it has really made no difference.
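
One thing I have not tried yet is GlassFish's connection leak tracing. If I am reading the GlassFish 3 attribute list correctly, it can be switched on with two extra attributes on the same pool element, roughly like this (again a sketch based on my reading, so the exact names may need checking):

    <!-- sketch: a connection held longer than 60 seconds should have its
         allocating stack trace logged to server.log, and reclaim should
         return it to the pool -->
    <jdbc-connection-pool name="Central_data"
                          datasource-classname="net.sourceforge.jtds.jdbcx.JtdsDataSource"
                          res-type="javax.sql.ConnectionPoolDataSource"
                          connection-leak-timeout-in-seconds="60"
                          connection-leak-reclaim="true">
        <!-- same property elements as above -->
    </jdbc-connection-pool>

I believe the same settings can also be applied with asadmin set against resources.jdbc-connection-pool.Central_data, but I have only looked at the domain.xml side so far.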

 

What should I look for to find out why the pool reports all connections in use long before the configured maximum pool size is reached?
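
If it would help with any answers, I can also turn on JDBC pool monitoring. As far as I can tell that is done through the monitoring-service section of domain.xml, something like the sketch below, which should let me watch the pool's in-use/free connection counters directly instead of inferring them from VisualVM thread counts:

    <!-- sketch: raise JDBC pool monitoring so the pool statistics
         (connections in use, free, waiting callers) become visible -->
    <monitoring-service>
        <module-monitoring-levels jdbc-connection-pool="HIGH"></module-monitoring-levels>
    </monitoring-service>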