Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

elgeo
by Valued Contributor II
  • 3267 Views
  • 6 replies
  • 8 kudos

Clean up _delta_log files

Hello experts. We are trying to clarify how to clean up the large number of files accumulating in the _delta_log folder (JSON, CRC, and checkpoint files). We went through the related posts in the forum and followed the below: SET spark.da...

Latest Reply
Brad
Contributor II
  • 8 kudos

Awesome, thanks for response.

5 More Replies
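The retention settings discussed in this thread are usually applied as Delta table properties. A minimal sketch follows; the table name `events` is a hypothetical example, and note that cleanup of old _delta_log JSON/CRC files only happens when a new checkpoint is written, so fresh commits may be needed before old files actually disappear.

```sql
-- Hypothetical table name; the intervals shown are examples, not recommendations.
ALTER TABLE events SET TBLPROPERTIES (
  'delta.logRetentionDuration' = 'interval 7 days',  -- how long _delta_log entries are kept
  'delta.checkpointInterval' = '10'                  -- write a checkpoint every 10 commits
);
```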
Abel_Martinez
by Contributor
  • 16945 Views
  • 10 replies
  • 38 kudos

Why do Python logs show the [REDACTED] literal in place of spaces when I use dbutils.secrets.get in my code?

When I use dbutils.secrets.get in my code, spaces in the log are replaced by the "[REDACTED]" literal. This is very annoying and makes reading the logs difficult. Any idea how to avoid this? See my screenshot...

Latest Reply
jlb0001
New Contributor III
  • 38 kudos

I ran into the same issue and found that the reason was that the notebook included some test keys with values of "A" and "B" for simple testing. I noticed that any string with a substring of "A" or "B" was "[REDACTED]". So, in my case, it was an eas...

9 More Replies
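The behaviour jlb0001 describes can be reproduced outside Databricks. The sketch below is a pure-Python illustration of substring-based redaction, not Databricks' actual (internal) masking code: once a one-character value such as "A", or even a single space, is in scope as a secret, every occurrence of it in the output gets masked.

```python
# Illustration of substring-based secret redaction, NOT Databricks' internal code.

def redact(text: str, secrets: list[str]) -> str:
    """Replace every occurrence of each secret value with [REDACTED]."""
    for s in secrets:
        text = text.replace(s, "[REDACTED]")
    return text

# A realistic secret only masks itself:
print(redact("connecting with s3cr3t-token", ["s3cr3t-token"]))
# → connecting with [REDACTED]

# One-character "test" secrets such as "A" and "B" mask every occurrence:
print(redact("Loading Table A and B", ["A", "B"]))
# → Loading Table [REDACTED] and [REDACTED]

# And a secret whose value is a single space masks every space, which is
# exactly the symptom described in the question:
print(redact("loading table rows", [" "]))
# → loading[REDACTED]table[REDACTED]rows
```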
vinaykumar
by New Contributor III
  • 5702 Views
  • 6 replies
  • 0 kudos

Log files are not getting deleted automatically after the logRetentionDuration interval

Hi team, log files are not getting deleted automatically from the delta log folder after the logRetentionDuration interval, and after analysis I see checkpoint files are not getting created after 10 commits. Below are the table properties, set using spark.sql(    f"""  ...

No checkpoint.parquet
Latest Reply
Anonymous
Not applicable
  • 0 kudos

Hi @vinay kumar, hope all is well! Just wanted to check in if you were able to resolve your issue, and would you be happy to share the solution or mark an answer as best? Else please let us know if you need more help. We'd love to hear from you. Thanks...

5 More Replies
Kaijser
by New Contributor II
  • 3939 Views
  • 3 replies
  • 1 kudos

Logging clogged up with error messages (OSError: [Errno 95] Operation not supported, --- Logging error ---)

I have encountered this issue for a while now and it happens on each run that is triggered. I discovered 2 things: 1) If I run my script on a cluster that is not active and the cluster is activated by a scheduled trigger (not manually!) this doesn't happ...

Latest Reply
manasa
Contributor
  • 1 kudos

Hi @Aaron Kaijser, are you able to write your log file to ADLS? If yes, could you please explain how you did it.

2 More Replies
Nilofar
by New Contributor II
  • 3088 Views
  • 6 replies
  • 0 kudos

I am not able to reset the password for Databricks Community Cloud

Hi, I am not able to log in to https://community.cloud.databricks.com/login.html. Please assist.

Latest Reply
Anonymous
Not applicable
  • 0 kudos

Hi @Nilofar Sharma, thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answer...

5 More Replies
Therdpong
by New Contributor III
  • 1822 Views
  • 2 replies
  • 0 kudos

How to check which job clusters have expanded their disks

We would like to know how to check which job clusters have had their disks expanded.

Latest Reply
jose_gonzalez
Databricks Employee
  • 0 kudos

You can check in the cluster's event logs. Type "disk" in the search box and you will see all the related events there.

1 More Replies
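To do the same search programmatically, one can pull the events from the Clusters API events endpoint and filter them client-side. The helper below only assumes each event is a dict with a "type" field; the sample data is fabricated for illustration, and the disk-related type name shown is an example:

```python
def disk_events(events: list[dict]) -> list[dict]:
    """Keep only cluster events whose type mentions DISK."""
    return [e for e in events if "DISK" in e.get("type", "").upper()]

# Fabricated sample shaped like cluster event-log entries:
sample = [
    {"type": "RUNNING", "timestamp": 1},
    {"type": "EXPANDED_DISK", "timestamp": 2},
    {"type": "NODES_LOST", "timestamp": 3},
]
print([e["type"] for e in disk_events(sample)])  # → ['EXPANDED_DISK']
```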
dibo
by New Contributor II
  • 694 Views
  • 0 replies
  • 0 kudos

I can't log in to https://community.cloud.databricks.com/login.html

I can't log in to https://community.cloud.databricks.com/login.html with the correct username and password. I clicked the button to reset my password, received the password-reset email, and changed my password, but I still can't ...

chandan_a_v
by Valued Contributor
  • 9837 Views
  • 8 replies
  • 6 kudos

Resolved! logging.basicConfig not creating a file in Databricks

Hi, I am using the logger to log some parameters in my code and I want to save the file under DBFS, but for some reason the file is not getting created there. If I clear the state of the notebook and check the DBFS dir, then the file is present. Pleas...

Latest Reply
Anonymous
Not applicable
  • 6 kudos

Perhaps PyCharm sets a different working directory, meaning the file ends up in another place. Try providing a full path.

7 More Replies
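Two other frequent causes of this symptom, beyond the working-directory issue the reply mentions: `logging.basicConfig` silently does nothing when the root logger already has handlers (common in notebooks, where handlers may be pre-installed), which `force=True` (Python 3.8+) overrides; and a relative filename resolves against the driver's working directory, so an absolute path avoids surprises. A sketch, with a hypothetical local path:

```python
import logging

LOG_PATH = "/tmp/my_job.log"  # hypothetical absolute path

logging.basicConfig(filename=LOG_PATH, filemode="w", level=logging.INFO,
                    force=True)  # override any handlers installed earlier
logging.info("hello from the notebook")
logging.shutdown()  # flush so the file is complete before reading it

with open(LOG_PATH) as f:
    print(f.read().strip())  # → INFO:root:hello from the notebook
```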
kjoth
by Contributor II
  • 6778 Views
  • 4 replies
  • 3 kudos

Resolved! Pyspark logging - custom to Azure blob mount directory

I'm using the logging module to log the events from the job, but it seems the log file is being created with only 1 line; subsequent log events are not being recorded. Is there any reference for custom logging in Databricks?

Latest Reply
Anonymous
Not applicable
  • 3 kudos

@karthick J - If Jose's answer helped solve the issue, would you be happy to mark their answer as best so that others can find the solution more easily?

3 More Replies
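A log file containing only its first line often means later records were buffered and never flushed, or each task re-created the handler and truncated the file. A sketch of one way to avoid both problems: configure a single FileHandler once and flush after each record. The path is a hypothetical example:

```python
import logging

LOG_PATH = "/tmp/pipeline.log"  # hypothetical path

logger = logging.getLogger("pipeline")
logger.setLevel(logging.INFO)
handler = logging.FileHandler(LOG_PATH, mode="w")  # one handler, configured once
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)

for step in ("extract", "transform", "load"):
    logger.info("step %s done", step)
    handler.flush()  # push each record to disk so no events are lost

handler.close()
```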
MGH1
by New Contributor III
  • 5757 Views
  • 5 replies
  • 3 kudos

Resolved! how to log the KerasClassifier model in a sklearn pipeline in mlflow?

I have a set of pre-processing stages in a sklearn `Pipeline` and an estimator which is a `KerasClassifier` (`from tensorflow.keras.wrappers.scikit_learn import KerasClassifier`).My overall goal is to tune and log the whole sklearn pipeline in `mlflo...

Latest Reply
shan_chandra
Databricks Employee
  • 3 kudos

Could you please share the full error stack trace?

4 More Replies
brickster_2018
by Databricks Employee
  • 16206 Views
  • 1 replies
  • 2 kudos

Resolved! How do I change the log level in Databricks?

How can I change the log level of the Spark Driver and executor process?

Latest Reply
brickster_2018
Databricks Employee
  • 2 kudos

Change the log level of the driver:

%scala
spark.sparkContext.setLogLevel("DEBUG")
spark.sparkContext.setLogLevel("INFO")

Change the log level of a particular package in the driver logs:

%scala
org.apache.log4j.Logger.getLogger("shaded.databricks.v201809...
