The model is always stuck in the Pending state, while the serving status says Ready.

haylee
New Contributor II

I am serving a logistic regression model, and I keep getting the error below. The issue tends to happen as more data is being modeled, and no matter how much I increase the serving cluster memory, it still errors.

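For context, here is roughly how the pipeline is saved and loaded; the paths, columns, and toy data below are placeholders, not my real ones:

```python
# Minimal sketch of my save/load path -- everything here is a stand-in.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline, PipelineModel
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import VectorAssembler

spark = SparkSession.builder.getOrCreate()

# Toy stand-in for the real training data.
train_df = spark.createDataFrame(
    [(1.0, 2.0, 0.0), (2.0, 1.0, 1.0), (3.0, 3.0, 1.0)],
    ["f1", "f2", "label"],
)

assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")
model = Pipeline(stages=[assembler, lr]).fit(train_df)

# Placeholder path, for illustration only.
model.write().overwrite().save("dbfs:/tmp/my_lr_pipeline")

# The serving cluster fails while running the equivalent of this load --
# PipelineModel.load is what shows up in the py4j frames of the trace below.
loaded = PipelineModel.load("dbfs:/tmp/my_lr_pipeline")
```

Here is the stack trace: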
22/06/14 15:24:47 WARN TaskSetManager: Lost task 0.0 in stage 17.0 (TID 17) (10.24.7.205 executor driver): TaskResultLost (result lost from block manager)
22/06/14 15:24:47 ERROR TaskSetManager: Task 0 in stage 17.0 failed 1 times; aborting job
22/06/14 15:24:47 ERROR Instrumentation: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 17.0 failed 1 times, most recent failure: Lost task 0.0 in stage 17.0 (TID 17) (10.24.7.205 executor driver): TaskResultLost (result lost from block manager)
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2403)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2352)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2351)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2351)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1109)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1109)
    at scala.Option.foreach(Option.scala:407)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1109)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2591)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2533)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2522)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:898)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2214)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2235)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2254)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:476)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:429)
    at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:48)
    at org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:3715)
    at org.apache.spark.sql.Dataset.$anonfun$head$1(Dataset.scala:2728)
    at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3706)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3704)
    at org.apache.spark.sql.Dataset.head(Dataset.scala:2728)
    at org.apache.spark.sql.Dataset.head(Dataset.scala:2735)
    at org.apache.spark.ml.classification.LogisticRegressionModel$LogisticRegressionModelReader.load(LogisticRegression.scala:1352)
    at org.apache.spark.ml.classification.LogisticRegressionModel$LogisticRegressionModelReader.load(LogisticRegression.scala:1324)
    at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$load$5(Pipeline.scala:277)
    at org.apache.spark.ml.MLEvents.withLoadInstanceEvent(events.scala:160)
    at org.apache.spark.ml.MLEvents.withLoadInstanceEvent$(events.scala:155)
    at org.apache.spark.ml.util.Instrumentation.withLoadInstanceEvent(Instrumentation.scala:42)
    at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$load$4(Pipeline.scala:277)
    at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
    at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
    at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
    at scala.collection.TraversableLike.map(TraversableLike.scala:286)
    at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
    at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:198)
    at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$load$3(Pipeline.scala:274)
    at org.apache.spark.ml.util.Instrumentation$.$anonfun$instrumented$1(Instrumentation.scala:191)
    at scala.util.Try$.apply(Try.scala:213)
    at org.apache.spark.ml.util.Instrumentation$.instrumented(Instrumentation.scala:191)
    at org.apache.spark.ml.Pipeline$SharedReadWrite$.load(Pipeline.scala:268)
    at org.apache.spark.ml.PipelineModel$PipelineModelReader.$anonfun$load$7(Pipeline.scala:356)
    at org.apache.spark.ml.MLEvents.withLoadInstanceEvent(events.scala:160)
    at org.apache.spark.ml.MLEvents.withLoadInstanceEvent$(events.scala:155)
    at org.apache.spark.ml.util.Instrumentation.withLoadInstanceEvent(Instrumentation.scala:42)
    at org.apache.spark.ml.PipelineModel$PipelineModelReader.$anonfun$load$6(Pipeline.scala:355)
    at org.apache.spark.ml.util.Instrumentation$.$anonfun$instrumented$1(Instrumentation.scala:191)
    at scala.util.Try$.apply(Try.scala:213)
    at org.apache.spark.ml.util.Instrumentation$.instrumented(Instrumentation.scala:191)
    at org.apache.spark.ml.PipelineModel$PipelineModelReader.load(Pipeline.scala:355)
    at org.apache.spark.ml.PipelineModel$PipelineModelReader.load(Pipeline.scala:349)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
    at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
    at java.lang.Thread.run(Thread.java:748)

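Reading the trace, the failure happens inside Dataset.head during LogisticRegressionModelReader.load, i.e. while the saved model coefficients are collected back to the driver, so I suspect a driver-side limit matters more here than total cluster memory. For completeness, this is the kind of Spark config I have been raising; the keys are standard Spark settings, but the values are just what I happened to try:

```python
# Illustrative only -- Spark config entries I raised on the serving cluster
# (set through the cluster configuration, not at runtime).
# spark.driver.maxResultSize caps serialized task results fetched by the
# driver; results above spark.task.maxDirectResultSize are shipped via the
# block manager, which is where "result lost from block manager" originates.
serving_cluster_spark_conf = {
    "spark.driver.maxResultSize": "8g",  # value illustrative
    "spark.driver.memory": "16g",        # value illustrative
}
```

Has anyone run into this when serving SparkML pipeline models?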