At my day job I work on Internet banking. The Internet bank is a relatively large, high-load Java/Spring/Hibernate web application that uses Oracle databases.
During our recent transition from a centralized data accessor (VJDBC) to local JDBC connection pools, made to reduce data roundtrip times, we started having issues with memory usage on our application servers: some requests began allocating tens to hundreds of megabytes of memory. While the garbage collector successfully reclaimed all of this memory afterwards (no memory leaks), it still caused high peak memory usage and overly frequent collections, which affected overall performance.
While profiling memory allocations with JProfiler, I discovered that OracleStatement.prepareAccessors() was responsible for these monstrous allocations (up to 600 MB at once, most of it in giant char or byte arrays). Google pointed me to this nice article on reducing the default prefetch size, which describes a very similar situation; however, those people had problems with queries returning LOBs. We weren't using any LOBs in our problematic queries and hadn't knowingly modified the defaultRowPrefetch connection property.
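For reference, the prefetch default can also be controlled at the connection level. Here is a minimal sketch of setting the defaultRowPrefetch property on a plain JDBC connection; the URL and credentials are placeholders, not our actual configuration:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class PrefetchConnectionDemo {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.setProperty("user", "scott");        // placeholder credentials
            props.setProperty("password", "tiger");
            // The Oracle driver reads this property at connect time; it controls how many
            // rows are prefetched (and buffered) per round trip by statements on this connection.
            props.setProperty("defaultRowPrefetch", "20");

            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", props); // placeholder URL
            // ... statements created from this connection now default to a row prefetch of 20 ...
            conn.close();
        }
    }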
Further investigation led to the way we were using Hibernate: for some queries that were expected to return large result sets, we were calling Query.setFetchSize() or Criteria.setFetchSize() with rather high values (e.g. 5000). This seemed reasonable, because we were also using setMaxResults() with the same value to limit the maximum length of the returned ResultSet. However, after some upgrades of Java, Hibernate, and the Oracle driver, this started to have these memory allocation side effects. It seems that Hibernate now passes the fetchSize parameter directly to OracleStatement's rowPrefetch value, forcing it to allocate an array of roughly rowPrefetch * expectedRowSize bytes immediately, even before it runs the actual query. This array can be ridiculously large, even if the actual query returns only a few rows. Later investigation showed that the batch-size attribute in the Hibernate mapping files (hbm.xml) has exactly the same effect and also results in giant pre-allocations.
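To illustrate the problematic pattern, here is a sketch of the kind of query we had; the entity and HQL are made up, only the combination of setMaxResults() and setFetchSize() reflects what we were actually doing:

    // Hypothetical HQL, shown only to illustrate the problematic combination.
    List<Payment> payments = session
            .createQuery("from Payment p where p.accountId = :accountId")
            .setParameter("accountId", accountId)
            .setMaxResults(5000)   // caps the result set at 5000 rows
            .setFetchSize(5000)    // ...but also becomes Oracle's rowPrefetch, so roughly
                                   // 5000 * expectedRowSize bytes are allocated up front,
                                   // even if the query returns just a few rows
            .list();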
As a result, we had to review all the batch-size and setFetchSize() values we were using in our Hibernate queries and mappings, in most cases reducing them significantly. This may hurt the worst-case performance of some long queries (they will require more roundtrips to the database), but it also reduces the overall amount of garbage accumulating in the heap and thus the frequency of garbage collections, which has a positive impact on CPU load. Shorter results run equally fast, so it actually makes sense to rely on the average statistics of the actual responses when choosing optimal rowPrefetch values. The default value is 10, which is hardcoded in the Oracle driver.
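The same hypothetical query after the review, with the prefetch sized for a typical response rather than the worst case (the value 50 is just an example):

    List<Payment> payments = session
            .createQuery("from Payment p where p.accountId = :accountId")
            .setParameter("accountId", accountId)
            .setMaxResults(5000)   // the hard cap can stay
            .setFetchSize(50)      // modest prefetch: unusually long results simply take
                                   // a few more roundtrips instead of a giant allocation
            .list();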
For longer queries, the above-mentioned article proposes the idea of geometrically increasing the rowPrefetch (manually setting it twice as big for each subsequent fetch). This is a nice idea, but I wonder why the Oracle driver can't do this automatically; this is how Java collections behave when they resize themselves. I haven't tried doing this with Hibernate yet, but I think it should be possible, especially if you use Query.scroll() instead of Query.list().
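Here is a rough sketch of the idea at the plain JDBC level (not Hibernate, which I haven't tried; the query, the starting size, and the upper bound are arbitrary, and an open Connection conn is assumed):

    // Start with a small prefetch and double it whenever a whole batch has been consumed,
    // so short results stay cheap and long results converge to few roundtrips.
    PreparedStatement ps = conn.prepareStatement("select * from payments"); // placeholder query
    ps.setFetchSize(10);                 // same as the driver's hardcoded default
    ResultSet rs = ps.executeQuery();
    int fetchSize = 10;
    int rowsInBatch = 0;
    while (rs.next()) {
        // ... process the current row ...
        if (++rowsInBatch == fetchSize) {
            fetchSize = Math.min(fetchSize * 2, 1000); // grow geometrically, with a cap
            rs.setFetchSize(fetchSize);  // applies to the next fetch from the server
            rowsInBatch = 0;
        }
    }
    rs.close();
    ps.close();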