Data Engineering

DLT (Delta Live Tables) error while reading a file from Data Lake Gen2

skirock
New Contributor

I am getting the following error while running a cell in Python. The same file reads fine when I upload the JSON file into Databricks and give that path to the df.read syntax. When I use DLT for the same file in the data lake, it gives me the error below (both reads are sketched after the trace).

 

py4j.Py4JException: An exception was raised by the Python Proxy. Return Message: Traceback (most recent call last):
  File "/databricks/spark/python/lib/py4j-0.10.9.7-src.zip/py4j/clientserver.py", line 642, in _call_proxy
    return_value = getattr(self.pool[obj_id], method)(*params)
  File "/databricks/spark/python/dlt/helpers.py", line 29, in call
    res = self.func()
  File "/home/spark-1527d41b-48fc-410a-adfe-cb/.ipykernel/3836/command-3092999216034608-2281310095", line 36, in employeedays
    df = spark.readStream.format("cloudFiles").option("cloudFiles.format", "json").option("cloudFiles.inferColumnTypes", "true").load(f"abfss://raw@dlsanalyticswe01p.dfs.core.windows.net/{source}/{schema}/{file_name}")
  File "/databricks/spark/python/pyspark/sql/streaming/readwriter.py", line 287, in load
    return self._df(self._jreader.load(path))
  File "/databricks/spark/python/lib/py4j-0.10.9.7-src.zip/py4j/java_gateway.py", line 1355, in __call__
    return_value = get_return_value(
  File "/databricks/spark/python/pyspark/errors/exceptions/captured.py", line 188, in deco
    return f(*a, **kw)
  File "/databricks/spark/python/lib/py4j-0.10.9.7-src.zip/py4j/protocol.py", line 326, in get_return_value
    raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling o452.load.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 28.0 failed 4 times, most recent failure: Lost task 0.3 in stage 28.0 (TID 65) (10.210.112.4 executor 4): ExecutorLostFailure (executor 4 exited caused by one of the running tasks) Reason: Command exited with code 50
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:3681) at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:3603) at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:3590) at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:3590) at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1535) at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1535) at scala.Option.foreach(Option.scala:407) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1535) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:3926) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:3838) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:3826) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:51) at org.apache.spark.scheduler.DAGScheduler.$anonfun$runJob$1(DAGScheduler.scala:1259) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:1247) at org.apache.spark.SparkContext.runJobInternal(SparkContext.scala:2966) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2949) at org.apache.spark.SparkContext.runJob(SparkContext.scala:3061) at org.apache.spark.sql.catalyst.json.JsonInferSchema.infer(JsonInferSchema.scala:133) at
org.apache.spark.sql.execution.datasources.json.TextInputJsonDataSource$.$anonfun$inferFromDataset$1(JsonDataSource.scala:125) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:528) at org.apache.spark.sql.execution.datasources.json.TextInputJsonDataSource$.inferFromDataset(JsonDataSource.scala:125) at org.apache.spark.sql.execution.datasources.json.TextInputJsonDataSource$.infer(JsonDataSource.scala:116) at org.apache.spark.sql.execution.datasources.json.JsonDataSource.inferSchema(JsonDataSource.scala:82) at org.apache.spark.sql.execution.datasources.json.JsonFileFormat.inferSchema(JsonFileFormat.scala:61) at com.databricks.sql.cloudfiles.utils.SchemaInferenceUtils$.$anonfun$getOrUpdatePersistedSchema$12(SchemaInferenceUtils.scala:634) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at com.databricks.sql.cloudfiles.utils.SchemaInferenceUtils$.$anonfun$getOrUpdatePersistedSchema$10(SchemaInferenceUtils.scala:635) at com.databricks.backend.daemon.driver.ProgressReporter$.withStatusCode(ProgressReporter.scala:410) at com.databricks.backend.daemon.driver.ProgressReporter$.withStatusCode(ProgressReporter.scala:396) at com.databricks.spark.util.SparkDatabricksProgressReporter$.withStatusCode(ProgressReporter.scala:34) at com.databricks.sql.cloudfiles.utils.SchemaInferenceUtils$.getOrUpdatePersistedSchema(SchemaInferenceUtils.scala:630) at com.databricks.sql.cloudfiles.utils.SchemaInferenceUtils$.$anonfun$inferSchema$1(SchemaInferenceUtils.scala:820) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at com.databricks.sql.cloudfiles.utils.SchemaInferenceUtils$.inferSchema(SchemaInferenceUtils.scala:791) at com.databricks.sql.cloudfiles.utils.SchemaInferenceUtils$.checkAndInferSchema(SchemaInferenceUtils.scala:888) at com.databricks.sql.fileNotification.autoIngest.CloudFilesSourceProvider.determineSchema(CloudFilesSourceProvider.scala:58) at com.databricks.sql.fileNotification.autoIngest.CloudFilesSourceProvider.sourceSchema(CloudFilesSourceProvider.scala:85) at org.apache.spark.sql.execution.datasources.DataSource.sourceSchema(DataSource.scala:266) at org.apache.spark.sql.execution.datasources.DataSource.sourceInfo$lzycompute(DataSource.scala:150) at org.apache.spark.sql.execution.datasources.DataSource.sourceInfo(DataSource.scala:150) at org.apache.spark.sql.execution.streaming.StreamingRelation$.apply(StreamingRelation.scala:39) at org.apache.spark.sql.streaming.DataStreamReader.loadInternal(DataStreamReader.scala:221) at org.apache.spark.sql.streaming.DataStreamReader.load(DataStreamReader.scala:265) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:397) at py4j.Gateway.invoke(Gateway.java:306) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.ClientServerConnection.sendCommand(ClientServerConnection.java:261) at py4j.CallbackClient.sendCommand(CallbackClient.java:384) at py4j.CallbackClient.sendCommand(CallbackClient.java:356) at py4j.reflection.PythonProxyHandler.invoke(PythonProxyHandler.java:106) at com.sun.proxy.$Proxy162.call(Unknown Source) at 
com.databricks.pipelines.Pipeline$DatasetBuilderImpl.$anonfun$query$2(Pipeline.scala:434) at com.databricks.pipelines.Pipeline$$anon$1.$anonfun$call$3(Pipeline.scala:1262) at com.databricks.pipelines.Pipeline.withContext(Pipeline.scala:135) at com.databricks.pipelines.Pipeline$$anon$1.$anonfun$call$2(Pipeline.scala:1262) at scala.util.Try$.apply(Try.scala:213) at com.databricks.pipelines.Pipeline$$anon$1.call(Pipeline.scala:1261) at com.databricks.pipelines.graph.FlowFunction$$anon$3.call(Flow.scala:297) at com.databricks.pipelines.graph.FlowFunction.callWithCache(Flow.scala:249) at com.databricks.pipelines.graph.FlowFunction.callWithCache$(Flow.scala:232) at com.databricks.pipelines.graph.FlowFunction$$anon$3.callWithCache(Flow.scala:293) at com.databricks.pipelines.graph.Flow.flowFuncResult(Flow.scala:109) at com.databricks.pipelines.graph.Flow.flowFuncResult$(Flow.scala:107) at com.databricks.pipelines.graph.UnresolvedFlow.flowFuncResult$lzycompute(Flow.scala:377) at com.databricks.pipelines.graph.UnresolvedFlow.flowFuncResult(Flow.scala:377) at com.databricks.pipelines.graph.Flow.failure(Flow.scala:176) at com.databricks.pipelines.graph.Flow.failure$(Flow.scala:176) at com.databricks.pipelines.graph.UnresolvedFlow.failure(Flow.scala:377) at com.databricks.pipelines.graph.Flow.resolved(Flow.scala:200) at com.databricks.pipelines.graph.Flow.resolved$(Flow.scala:200) at com.databricks.pipelines.graph.UnresolvedFlow.resolved(Flow.scala:377) at com.databricks.pipelines.graph.DataflowGraph.$anonfun$resolve$10(DataflowGraph.scala:392) at com.databricks.pipelines.graph.DltApiUsageLogging$.$anonfun$recordPipelinesOperation$3(DltApiUsageLogging.scala:50) at com.databricks.pipelines.graph.DltApiUsageLogging$.$anonfun$recordPipelinesOperation$4(DltApiUsageLogging.scala:62) at com.databricks.logging.UsageLogging.$anonfun$recordOperation$1(UsageLogging.scala:571) at com.databricks.logging.UsageLogging.executeThunkAndCaptureResultTags$1(UsageLogging.scala:667) at com.databricks.logging.UsageLogging.$anonfun$recordOperationWithResultTags$4(UsageLogging.scala:685) at com.databricks.logging.UsageLogging.$anonfun$withAttributionContext$1(UsageLogging.scala:426) at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62) at com.databricks.logging.AttributionContext$.withValue(AttributionContext.scala:196) at com.databricks.logging.UsageLogging.withAttributionContext(UsageLogging.scala:424) at com.databricks.logging.UsageLogging.withAttributionContext$(UsageLogging.scala:418) at com.databricks.pipelines.graph.DltApiUsageLogging$.withAttributionContext(DltApiUsageLogging.scala:16) at com.databricks.logging.UsageLogging.withAttributionTags(UsageLogging.scala:470) at com.databricks.logging.UsageLogging.withAttributionTags$(UsageLogging.scala:455) at com.databricks.pipelines.graph.DltApiUsageLogging$.withAttributionTags(DltApiUsageLogging.scala:16) at com.databricks.logging.UsageLogging.recordOperationWithResultTags(UsageLogging.scala:662) at com.databricks.logging.UsageLogging.recordOperationWithResultTags$(UsageLogging.scala:580) at com.databricks.pipelines.graph.DltApiUsageLogging$.recordOperationWithResultTags(DltApiUsageLogging.scala:16) at com.databricks.logging.UsageLogging.recordOperation(UsageLogging.scala:571) at com.databricks.logging.UsageLogging.recordOperation$(UsageLogging.scala:540) at com.databricks.pipelines.graph.DltApiUsageLogging$.recordOperation(DltApiUsageLogging.scala:16) at 
com.databricks.pipelines.graph.DltApiUsageLogging$.$anonfun$recordPipelinesOperation$1(DltApiUsageLogging.scala:62) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at com.databricks.pipelines.graph.DltApiUsageLogging$.recordPipelinesOperation(DltApiUsageLogging.scala:41) at com.databricks.pipelines.graph.DataflowGraph.$anonfun$resolve$1(DataflowGraph.scala:391) at com.databricks.pipelines.graph.DltApiUsageLogging$.$anonfun$recordPipelinesOperation$3(DltApiUsageLogging.scala:50) at com.databricks.pipelines.graph.DltApiUsageLogging$.$anonfun$recordPipelinesOperation$4(DltApiUsageLogging.scala:62) at com.databricks.logging.UsageLogging.$anonfun$recordOperation$1(UsageLogging.scala:571) at com.databricks.logging.UsageLogging.executeThunkAndCaptureResultTags$1(UsageLogging.scala:667) at com.databricks.logging.UsageLogging.$anonfun$recordOperationWithResultTags$4(UsageLogging.scala:685) at com.databricks.logging.UsageLogging.$anonfun$withAttributionContext$1(UsageLogging.scala:426) at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62) at com.databricks.logging.AttributionContext$.withValue(AttributionContext.scala:196) at com.databricks.logging.UsageLogging.withAttributionContext(UsageLogging.scala:424) at com.databricks.logging.UsageLogging.withAttributionContext$(UsageLogging.scala:418) at com.databricks.pipelines.graph.DltApiUsageLogging$.withAttributionContext(DltApiUsageLogging.scala:16) at com.databricks.logging.UsageLogging.withAttributionTags(UsageLogging.scala:470) at com.databricks.logging.UsageLogging.withAttributionTags$(UsageLogging.scala:455) at com.databricks.pipelines.graph.DltApiUsageLogging$.withAttributionTags(DltApiUsageLogging.scala:16) at com.databricks.logging.UsageLogging.recordOperationWithResultTags(UsageLogging.scala:662) at com.databricks.logging.UsageLogging.recordOperationWithResultTags$(UsageLogging.scala:580) at com.databricks.pipelines.graph.DltApiUsageLogging$.recordOperationWithResultTags(DltApiUsageLogging.scala:16) at com.databricks.logging.UsageLogging.recordOperation(UsageLogging.scala:571) at com.databricks.logging.UsageLogging.recordOperation$(UsageLogging.scala:540) at com.databricks.pipelines.graph.DltApiUsageLogging$.recordOperation(DltApiUsageLogging.scala:16) at com.databricks.pipelines.graph.DltApiUsageLogging$.$anonfun$recordPipelinesOperation$1(DltApiUsageLogging.scala:62) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at com.databricks.pipelines.graph.DltApiUsageLogging$.recordPipelinesOperation(DltApiUsageLogging.scala:41) at com.databricks.pipelines.graph.DataflowGraph.resolve(DataflowGraph.scala:330) at com.databricks.pipelines.execution.core.AnalysisHandler.$anonfun$analyzeWithSparkConfOverrides$1(AnalysisHandler.scala:292) at com.databricks.pipelines.util.SparkSessionUtils$.withSQLConf(SparkSessionUtils.scala:19) at com.databricks.pipelines.util.SparkSessionUtils$.withSparkSession(SparkSessionUtils.scala:50) at com.databricks.pipelines.execution.core.AnalysisHandler.withAdditionalSparkConfs(AnalysisHandler.scala:121) at com.databricks.pipelines.execution.core.AnalysisHandler.analyzeWithSparkConfOverrides(AnalysisHandler.scala:245) at com.databricks.pipelines.execution.core.AnalysisHandler.analyze(AnalysisHandler.scala:320) at com.databricks.pipelines.execution.core.AnalysisHandler.analyze$(AnalysisHandler.scala:316) at com.databricks.pipelines.execution.core.SynchronousUpdate$.analyze(LocalMesaEngine.scala:526) at 
com.databricks.pipelines.execution.core.SynchronousUpdate.analyze(LocalMesaEngine.scala) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:397) at py4j.Gateway.invoke(Gateway.java:306) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:199) at py4j.ClientServerConnection.run(ClientServerConnection.java:119) at java.lang.Thread.run(Thread.java:750)
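
For reference, here is a rough sketch of both reads. The upload path and the source/schema/file_name values are placeholders (in the real pipeline they come from configuration), and the DLT part is reconstructed from the traceback above.

# Batch read of the uploaded copy of the file -- this works fine.
# "/FileStore/tables/myfile.json" is a placeholder for wherever the upload landed.
df = spark.read.format("json").load("/FileStore/tables/myfile.json")
df.printSchema()

# DLT / Auto Loader read of the same file in ADLS Gen2 -- this is the one that fails.
import dlt

source, schema, file_name = "mysource", "myschema", "myfile.json"  # placeholder values

@dlt.table
def employeedays():
    # Streaming read via Auto Loader (cloudFiles) with schema inference enabled,
    # exactly as shown in the traceback.
    df = (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.inferColumnTypes", "true")
        .load(f"abfss://raw@dlsanalyticswe01p.dfs.core.windows.net/{source}/{schema}/{file_name}")
    )
    return df

Going by the stack trace, the executor is lost (exit code 50) while Auto Loader is inferring the JSON schema (JsonInferSchema.infer), i.e. during schema inference for the cloudFiles source rather than in the stream itself.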

1 REPLY

Kaniz_Fatma
Community Manager
 
Hi @skirock

If you need further assistance or have additional questions, feel free to ask! 😊

 
