Java heap space crashing Solr query

I'm not a Java guy, but I use Solr for searching. After searching about this issue I couldn't find out why it is happening.

I have an index with 30 million records, no sorting, and the lightest setup I could manage, but I get the following exception after a few queries:

SEVERE: java.lang.OutOfMemoryError: Java heap space
        at org.apache.lucene.index.SegmentReader.createFakeNorms(
        at org.apache.lucene.index.SegmentReader.fakeNorms(
        at org.apache.lucene.index.SegmentReader.norms(
        ...
        at$TermWeight.scorer(
        ...
        at org.apache.solr.handler.component.QueryComponent.process(
        at org.apache.solr.handler.component.SearchHandler.handleRequestBody(
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(
        at org.apache.solr.core.SolrCore.execute(
        at org.apache.solr.servlet.SolrDispatchFilter.execute(
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(
        at org.apache.catalina.core.StandardWrapperValve.invoke(
        at org.apache.catalina.core.StandardContextValve.invoke(
        at org.apache.catalina.core.StandardHostValve.invoke(
        at org.apache.catalina.valves.ErrorReportValve.invoke(
        at org.apache.catalina.core.StandardEngineValve.invoke(
        at org.apache.catalina.connector.CoyoteAdapter.service(
        at org.apache.coyote.http11.Http11Processor.process(
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(
        at$
        ...

Then I restart Tomcat and it works again, until a few requests later it falls over again.

I'm not sorting (even though I wish I could), and most of the time the search is restricted to specific indexed fields (not all of them).

Could you help me? Thanks in advance :)


128 MB seems low for a Solr deployment with a few million records. You can indeed increase the maximum heap size of the JVM using -Xmx. The -XX:MinHeapFreeRatio option only changes the point at which the heap is resized; you could also set -Xms to the same value as -Xmx to allocate the maximum size up front and avoid any resizes.
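For example, assuming Tomcat is started via and you want the options picked up automatically, they can go in a setenv script next to it (the 512m figure below is just a placeholder to illustrate the flags, not a recommendation):

# bin/ — sourced automatically by
# Fix both the initial and maximum heap size so the JVM allocates
# the full heap at startup and never pauses to resize it.
# 512m is an illustrative value only; size it from real measurements.
export CATALINA_OPTS="$CATALINA_OPTS -Xms512m -Xmx512m"

Setting -Xms equal to -Xmx trades a larger footprint at startup for more predictable behavior, since the heap never grows or shrinks afterwards.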

However, you may want to try to determine a more precise value for the heap instead of blindly throwing more memory at it, as too much memory can be counter-productive latency-wise because of the longer pauses during garbage collection. Using JVisualVM (even better, with the VisualGC plugin) or jstat on the command line, you can see how much memory Solr uses after starting, how much it uses after a request, and generally how its heap varies during your typical usage.

For example, using jstat -gcutil <PID>, you can see how full the young (E, as in Eden) and old (O) generations of the JVM are (the old generation is what you should look at first). Or, using jstat -gc <PID>, you'll get absolute values instead of percentages (the C columns being the capacity, i.e. the maximum, and the U columns the actual usage). You need enough memory for Solr's working set plus whatever is needed to process the requests. With that information, you can tune the heap size much more precisely.
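Concretely, a sampling session might look like the sketch below, assuming the JDK's jps and jstat tools are on your PATH (the 5-second interval is an arbitrary choice for illustration):

# Find the PID of the Tomcat JVM running Solr
# (Bootstrap is Tomcat's main class as reported by jps -l).
jps -l | grep org.apache.catalina.startup.Bootstrap

# Sample collector activity every 5 seconds: E = Eden and O = old
# generation occupancy, each as a percentage of its current capacity.
jstat -gcutil <PID> 5000

# Same sampling with absolute numbers: the *C columns are capacities
# in KB, the *U columns actual usage (e.g. OC/OU for the old gen).
jstat -gc <PID> 5000

If the old-generation occupancy keeps climbing toward 100% across requests and full collections fail to reclaim much, the heap is genuinely too small for the working set; if it stays low, raising -Xmx further would only lengthen GC pauses.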

