code cache is full error
We recently updated to GraalVM 22.3.0 as our JDK, and since then we have seen a couple of production alerts with the following stack trace:
Caused by: org.graalvm.polyglot.PolyglotException: java.lang.RuntimeException: jdk.vm.ci.code.BailoutException: Code installation failed: code cache is full
at jdk.vm.ci.hotspot.HotSpotCodeCacheProvider.installCode(HotSpotCodeCacheProvider.java:149)
at jdk.vm.ci.code.CodeCacheProvider.setDefaultCode(CodeCacheProvider.java:67)
at org.graalvm.compiler.truffle.compiler.hotspot.HotSpotTruffleCompilerImpl.compileAndInstallStub(HotSpotTruffleCompilerImpl.java:263)
at org.graalvm.compiler.truffle.compiler.hotspot.HotSpotTruffleCompilerImpl.installTruffleReservedOopMethod(HotSpotTruffleCompilerImpl.java:237)
at org.graalvm.compiler.truffle.compiler.hotspot.libgraal.TruffleToLibGraalEntryPoints.installTruffleReservedOopMethod(TruffleToLibGraalEntryPoints.java:319)
at org.graalvm.compiler.truffle.runtime.hotspot.libgraal.TruffleToLibGraalCalls.installTruffleReservedOopMethod(Native Method)
at org.graalvm.compiler.truffle.runtime.hotspot.libgraal.LibGraalHotSpotTruffleCompiler.installTruffleReservedOopMethod(LibGraalHotSpotTruffleCompiler.java:169)
at org.graalvm.compiler.truffle.runtime.hotspot.AbstractHotSpotTruffleRuntime.installReservedOopMethods(AbstractHotSpotTruffleRuntime.java:545)
at org.graalvm.compiler.truffle.runtime.hotspot.AbstractHotSpotTruffleRuntime.bypassedReservedOop(AbstractHotSpotTruffleRuntime.java:514)
at org.graalvm.compiler.truffle.runtime.hotspot.HotSpotFastThreadLocal.setJVMCIReservedReference(HotSpotFastThreadLocal.java:139)
at org.graalvm.compiler.truffle.runtime.hotspot.HotSpotFastThreadLocal.set(HotSpotFastThreadLocal.java:62)
at com.oracle.truffle.polyglot.PolyglotFastThreadLocals.leave(PolyglotFastThreadLocals.java:139)
at com.oracle.truffle.polyglot.PolyglotThreadInfo.leaveInternal(PolyglotThreadInfo.java:158)
at com.oracle.truffle.polyglot.PolyglotEngineImpl.leaveCached(PolyglotEngineImpl.java:2080)
at com.oracle.truffle.polyglot.PolyglotEngineImpl.leave(PolyglotEngineImpl.java:2055)
at com.oracle.truffle.polyglot.PolyglotEngineImpl.leaveIfNeeded(PolyglotEngineImpl.java:1976)
at com.oracle.truffle.polyglot.PolyglotValueDispatch.hostLeave(PolyglotValueDispatch.java:1241)
at com.oracle.truffle.polyglot.PolyglotContextImpl.eval(PolyglotContextImpl.java:1484)
at com.oracle.truffle.polyglot.PolyglotContextDispatch.eval(PolyglotContextDispatch.java:63)
at org.graalvm.polyglot.Context.eval(Context.java:399)
at com.cdk.dna.rules.js.JavascriptExecutorGraal.transform(JavascriptExecutorGraal.java:52)
at com.cdk.dna.rules.js.JavascriptRuleExecutionService.executeAllActiveRules(JavascriptRuleExecutionService.java:107)
at com.cdk.dna.rules.js.JavascriptRuleExecutionService.executeAllActiveRules(JavascriptRuleExecutionService.java:84)
at com.cdk.dna.rules.js.JavascriptRuleExecutionService.executeAllActiveRules(JavascriptRuleExecutionService.java:25)
at com.cdk.dna.rules.js.JavascriptRuleServiceFactory.executeAllActiveRules(JavascriptRuleServiceFactory.java:87)
at com.cdk.dna.rules.js.JavascriptRuleServiceFactory.executeAllActiveRules(JavascriptRuleServiceFactory.java:41)
at com.cdk.dna.rules.kafka.consumers.RuleExecutionTransformer.transform(RuleExecutionTransformer.java:94)
at com.cdk.dna.rules.kafka.consumers.RecordProcessor.processInboundRuleset(RecordProcessor.java:219)
at com.cdk.dna.rules.kafka.consumers.RecordProcessor.lambda$handlePartitionRecords$8(RecordProcessor.java:145)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:960)
at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:934)
at java.util.stream.AbstractTask.compute(AbstractTask.java:327)
at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:754)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:686)
at java.util.stream.ReduceOps$ReduceOp.evaluateParallel(ReduceOps.java:927)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
at com.cdk.dna.rules.kafka.consumers.RecordProcessor.handlePartitionRecords(RecordProcessor.java:149)
at com.cdk.dna.kafka.consumer.SynchronousPartitionRecordsHandlerImpl.lambda$handle$0(SynchronousPartitionRecordsHandlerImpl.java:19)
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1768)
at util.TokenAwareRunnable.run(TokenAwareRunnable.java:28)
at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1395)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
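For context, the bailout appears to originate in HotSpot's JVMCI code cache (the memory region holding JIT-compiled machine code), which is distinct from Truffle's source-level code caching. If that region is simply undersized for our workload, one thing we could try (an assumption on our part, not a verified fix) is enlarging it and monitoring its usage:

```shell
# Hypothetical JVM flags; the 512m size is an example, not a recommendation.
# -XX:ReservedCodeCacheSize enlarges HotSpot's code cache,
# -XX:+PrintCodeCache reports its usage at JVM exit.
java \
  -XX:ReservedCodeCacheSize=512m \
  -XX:+PrintCodeCache \
  -jar app.jar
```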
The documentation on code caching across multiple contexts is here: https://www.graalvm.org/22.0/reference-manual/embed-languages/#code-caching-across-multiple-contexts
It states that you must use:
- A shared engine (yes, we do)
- A context configured to that engine (yes, we do, one context per thread in a ThreadLocal)
- A Source object with cached set to true (we explicitly disable code caching by setting it to false, due to previous memory issues; I think because we don't re-use the Source objects)
The javadoc of "cached(boolean)" clearly states that it controls code caching, so we are confident we have the right method:
Enables or disables code caching for this source. By default code caching is enabled. If true then the source does not require parsing every time this source is evaluated. If false then the source requires parsing every time the source is evaluated but does not remember any state. Disabling caching may be useful if the source is known to only be evaluated once. If a source instance is no longer referenced by the client then all code caches will be freed automatically. Also, if the underlying context or engine is no longer referenced then cached code for evaluated sources will be freed automatically.
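For reference, our usage roughly matches the following sketch (class, method, and source names are placeholders, not our actual code): a single shared Engine, one Context per thread in a ThreadLocal, and a Source built with cached(false).

```java
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Engine;
import org.graalvm.polyglot.Source;
import org.graalvm.polyglot.Value;

public class JsTransformSketch {
    // One engine shared across all threads.
    private static final Engine ENGINE = Engine.create();

    // One context per thread, bound to the shared engine.
    private static final ThreadLocal<Context> CONTEXT = ThreadLocal.withInitial(
            () -> Context.newBuilder("js").engine(ENGINE).build());

    public Value transform(String script) {
        // cached(false) disables Truffle's source-level code caching,
        // as described in the javadoc quoted above.
        Source source = Source.newBuilder("js", script, "rule.js")
                .cached(false)
                .buildLiteral();
        return CONTEXT.get().eval(source);
    }
}
```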
So the bottom line is:
- We appear to be disabling the code cache, and we did so to fix a prior memory problem; flipping that flag resolved it.
- We deployed 22.3.0, and now we get an exception suggesting the code cache is full.