kjoth
Contributor II
since 10-12-2021
06-26-2023

User Stats

  • 23 Posts
  • 0 Solutions
  • 16 Kudos given
  • 10 Kudos received

User Activity

We have created an unmanaged table with partitions on the DBFS location, using SQL. Example: %sql CREATE TABLE EnterpriseDailyTrafficSummarytest (EnterpriseID String, ServiceLocationID String, ReportDate String) USING parquet PARTITIONED BY (ReportDate)...
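A minimal sketch of the same pattern from PySpark (assuming the spark session that Databricks notebooks predefine; the table name, columns, and DBFS path below are illustrative). Specifying a LOCATION is what makes the table unmanaged (external):

    # Create an unmanaged (external), partitioned parquet table.
    # "spark" is the SparkSession predefined in Databricks notebooks.
    spark.sql("""
        CREATE TABLE IF NOT EXISTS EnterpriseDailyTrafficSummarytest (
            EnterpriseID      STRING,
            ServiceLocationID STRING,
            ReportDate        STRING
        )
        USING parquet
        PARTITIONED BY (ReportDate)
        LOCATION 'dbfs:/tmp/enterprise_daily_traffic/'  -- illustrative path
    """)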
Hi, we are capturing the exception if an error occurs, using try/except, but we want the job status to be marked as failed once we get the exception. What's the best way to do that? We are using PySpark.
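One hedged sketch (plain Python; the workload function is hypothetical): log the exception for diagnostics, then re-raise it so it propagates out of the task. Catching and swallowing the exception is what makes the run report success, so the re-raise is the essential part.

    import logging

    logger = logging.getLogger(__name__)

    def run_step():
        # Hypothetical workload standing in for the real PySpark logic.
        raise ValueError("simulated failure")

    try:
        run_step()
    except Exception as exc:
        logger.error("Job step failed: %s", exc)
        # Re-raise so the exception propagates out of the notebook/script;
        # the job run is then marked Failed instead of Succeeded.
        raise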
I'm running a scheduled job on job clusters. I didn't specify a log location for the cluster. Where are the stored logs located? Yes, I can see the logs in the runs, but I need the storage location of the logs.
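If no destination was configured, one way to make the location explicit for future runs is the cluster_log_conf field of the cluster spec. A sketch with illustrative values follows (the destination path, runtime version, and node type are assumptions, not defaults):

    # Fragment of a Jobs API new_cluster spec; cluster_log_conf tells
    # Databricks to deliver driver/executor logs to a fixed DBFS path.
    new_cluster = {
        "spark_version": "13.3.x-scala2.12",   # illustrative runtime
        "node_type_id": "i3.xlarge",           # illustrative node type
        "num_workers": 2,
        "cluster_log_conf": {
            "dbfs": {"destination": "dbfs:/cluster-logs/my-job"},
        },
    }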
How do we set up this value? Is this a value we can provide ourselves, or a default value we have to use?

    #!/bin/bash
    keystore_file="/dbfs/<keystore_directory>/jetty_ssl_driver_keystore.jks"
    keystore_password="gb1gQqZ9ZIHS"
    sasl_secret=$(sha256sum $keystore_file...
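Reading the snippet, the secret does not look like a free-form value: it appears to be derived by hashing the keystore file. A hedged Python equivalent of that derivation (the path keeps the post's placeholder, so substitute your own directory):

    import hashlib

    # Mirror of the script's sasl_secret=$(sha256sum $keystore_file ...):
    # the secret is the SHA-256 digest of the keystore, not a chosen value.
    keystore_file = "/dbfs/<keystore_directory>/jetty_ssl_driver_keystore.jks"

    with open(keystore_file, "rb") as f:
        sasl_secret = hashlib.sha256(f.read()).hexdigest()
    print(sasl_secret)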
I'm using the logging module to log events from the job, but the log file is created with only one line; the consecutive log events are not being recorded. Is there any reference for custom logging in Databricks?
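A common cause is pointing the FileHandler straight at a /dbfs path, where appends may not behave like a local file. A minimal sketch that avoids this (paths and logger name are illustrative; dbutils is the helper Databricks notebooks predefine): log to the driver's local disk, then copy the file out at the end.

    import logging

    LOCAL_LOG = "/tmp/job.log"        # local driver path (illustrative)
    DBFS_LOG = "dbfs:/logs/job.log"   # final location (illustrative)

    logger = logging.getLogger("job")
    logger.setLevel(logging.INFO)
    handler = logging.FileHandler(LOCAL_LOG)
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(message)s")
    )
    if not logger.handlers:           # avoid duplicate handlers on re-runs
        logger.addHandler(handler)

    logger.info("step 1 complete")
    logger.info("step 2 complete")

    handler.flush()
    # "file:" marks a local driver path for dbutils.fs operations.
    dbutils.fs.cp("file:" + LOCAL_LOG, DBFS_LOG)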