We are running Spark 2.5.8 on a Solaris 10 zone, with Openfire 3.6.4 and Java 1.5.0_28-b04, for approximately 1,500 users.
About once a month, tickets come in from users who are unable to connect to EIM Spark: existing connections stay active, but new ones fail. After killing the spark.exe process on a PC, Spark still will not open, and Openfire goes into "maintenance" mode. Disabling and re-enabling the service does not fix it, although we have not yet tried killing the Java processes. Our workaround is to reboot the zone.

I saw a reference to the error below suggesting that the cache.username2roster setting be increased. It is not currently listed as a System Property. Under the cache summary, the Roster cache shows a max size of 0.25 MB and a current size of 0.23 MB.
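For anyone else hitting this: Openfire cache limits can be set as System Properties in the Admin Console (Server > Server Manager > System Properties). The sketch below assumes the usual `cache.<name>.size` convention with values in bytes; the property names and the sizes shown here are illustrative, not a recommendation, so check them against your Openfire version before applying:

```
# Hypothetical values, in bytes - add via Admin Console > System Properties
# Roster cache (property name referenced in the error above)
cache.username2roster.size = 2097152

# VCard cache - the log entries below show ~1 MB objects failing to fit,
# so the VCard cache may be the one that actually needs raising
# (property name assumed; verify for your install)
cache.vcardCache.size = 10485760
```

A server restart may be needed for some cache properties to take effect.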
========== warn.log
2014.02.10 09:37:26 Cache: VCard -- object with key p67283 is too large to fit in cache. Size is 1048849
2014.02.10 09:37:36 Cache: VCard -- object with key p67283 is too large to fit in cache. Size is 1048849
2014.02.10 09:37:47 Unexpected packet tag (not message,iq,presence)<bind xmlns="urn:ietf:params:xml:ns:xmpp-bind"><resource>Spark 2.6.2</resource></bind>
2014.02.10 09:38:02 Cache: VCard -- object with key p67283 is too large to fit in cache. Size is 1048849
2014.02.10 09:38:13 Cache: VCard -- object with key p67109 is too large to fit in cache. Size is 1052874
I'm considering raising the roster cache to 1 GB, but was hoping to find out whether anyone has a more precise remedy for the problem. A day or so before our last crash on Feb 10th, Java memory usage was between 78% and 95%; currently it's at 14%. Is there some cleanup needed for old or logged-off sessions that don't go away and continue to hog resources?
Any help would be appreciated.