Download Spark Driver logs and event logs from Databricks using API
Reference: PY SDK. I can't find any API to download the logs directly. Did I miss anything? Otherwise, I can download them via the DBFS cluster logs location.
- 1345 Views
- 0 replies
- 0 kudos
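As far as I can tell there is no single "download driver logs" endpoint, so one workaround is to read the files from the cluster's log delivery location on DBFS. A minimal sketch, assuming the databricks-sdk Python package and a hypothetical log destination of dbfs:/cluster-logs:

```python
import base64
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

def download_dbfs_file(path: str, chunk: int = 1024 * 1024) -> bytes:
    """Read a DBFS file in 1 MB chunks (the API's per-read limit)."""
    data, offset = b"", 0
    while True:
        resp = w.dbfs.read(path, offset=offset, length=chunk)
        if not resp.bytes_read:
            break
        data += base64.b64decode(resp.data)
        offset += resp.bytes_read
    return data

# Hypothetical layout: driver logs under dbfs:/cluster-logs/<cluster-id>/driver
for f in w.dbfs.list("dbfs:/cluster-logs/<cluster-id>/driver"):
    if not f.is_dir:
        with open(f.path.split("/")[-1], "wb") as out:
            out.write(download_dbfs_file(f.path))
```

The dbfs read call returns base64-encoded chunks capped at roughly 1 MB per request, hence the loop.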
Hi @Cert-Team, I hope this message finds you well. I am writing to request a review of my recently suspended exam. I believe that my situation warrants reconsideration, and I would like to provide some context for your understanding. I applied for Datab...
Hi @Sujitha or @Kaniz, can you please respond to the above query swiftly, as I need to complete my test quickly? Kindly help me to resume or restart my test. Regards, Fasih Ahmed
Hi, my scenario is this:
- Auto Loader loads data from ADLS once a day.
- Data are merged into the bronze zone.
- The bronze table has CDF turned on.
- I read the CDF and merge into the silver table.
All works fine, but when there is no new file for Auto Loader to process, the CDF ta...
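For context, the read-CDF-and-merge step usually looks something like the sketch below; table names (bronze.events, silver.events) and the key column id are hypothetical, and last_version would come from your own checkpointing:

```python
# A minimal sketch: read the change feed from bronze and MERGE it into silver.
from delta.tables import DeltaTable

last_version = 5  # hypothetical: the last CDF version already merged into silver

changes = (
    spark.read.format("delta")
    .option("readChangeFeed", "true")
    .option("startingVersion", last_version + 1)
    .table("bronze.events")
    .filter("_change_type IN ('insert', 'update_postimage')")
)

(
    DeltaTable.forName(spark, "silver.events").alias("t")
    .merge(changes.alias("s"), "t.id = s.id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```

As I understand it, a batch CDF read whose startingVersion is past the table's latest commit raises an error rather than returning an empty result, which is worth checking on days when Auto Loader finds no new files.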
We have enabled system tables and can see the compute schema under them, but under that schema we are not able to see the cluster and node_type tables in Azure Databricks. Are there any limitations for the above tables?
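If it helps, the expected table names are system.compute.clusters and system.compute.node_types; a quick check from a notebook, assuming the compute schema has been enabled on the metastore (system schemas are enabled per metastore by an admin):

```python
# If these queries fail with TABLE_OR_VIEW_NOT_FOUND, the compute system schema
# may not be fully enabled on the metastore yet.
spark.sql("SELECT * FROM system.compute.clusters LIMIT 10").show()
spark.sql("SELECT * FROM system.compute.node_types LIMIT 10").show()
```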
Hello! I'm following this documentation: https://docs.databricks.com/en/getting-started/community-edition.html and I can't complete step 4 because I don't receive any email to verify my email address. I have checked the spam folder; no email. Is there so...
@data_game please check the junk mail folder and also message quarantine if you are using M$ services. The verification mail goes out fine; however, it may show up in Junk | Quarantine.
Hi all, I'm trying to create one job cluster, with a single configuration or specification, that has a workflow, and this workflow needs to have 3 dependent tasks in a straight line, for example t1 -> t2 -> t3. In Databricks there are some constraints also...
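A minimal sketch of that shape, assuming the databricks-sdk Python package: the job cluster is defined once under a key and reused by all three tasks, and depends_on chains them t1 -> t2 -> t3. The runtime version, node type, and notebook paths are hypothetical:

```python
from typing import Optional

from databricks.sdk import WorkspaceClient
from databricks.sdk.service import compute, jobs

w = WorkspaceClient()

# One job cluster, shared by every task that references its key.
shared_cluster = jobs.JobCluster(
    job_cluster_key="shared",
    new_cluster=compute.ClusterSpec(
        spark_version="15.4.x-scala2.12",  # hypothetical runtime version
        node_type_id="Standard_DS3_v2",    # hypothetical node type
        num_workers=2,
    ),
)

def task(key: str, depends_on: Optional[str] = None) -> jobs.Task:
    """Build a notebook task that runs on the shared job cluster."""
    return jobs.Task(
        task_key=key,
        job_cluster_key="shared",
        notebook_task=jobs.NotebookTask(notebook_path=f"/Jobs/{key}"),  # hypothetical path
        depends_on=[jobs.TaskDependency(task_key=depends_on)] if depends_on else None,
    )

job = w.jobs.create(
    name="linear-pipeline",  # hypothetical job name
    job_clusters=[shared_cluster],
    tasks=[task("t1"), task("t2", "t1"), task("t3", "t2")],
)
print(job.job_id)
```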
Hello, we are using DLT pipelines for many of our jobs, with failure notifications sent to Slack. Wondering if there is a clean way to disable the alerts when in development mode. It does make sense to have them turned off in dev, doesn't it?
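I'm not aware of a built-in dev-mode mute, but if the pipeline is deployed programmatically, one option is to attach the notification settings only outside development. A rough sketch of the settings payload; the field names (development, notifications, email_recipients, alerts) reflect the Pipelines API as I understand it, and everything else is hypothetical:

```python
# Hypothetical deployment flag, e.g. driven by your CI/CD environment.
is_dev = True

settings = {
    "name": "my-pipeline",  # hypothetical
    "development": is_dev,
    "libraries": [{"notebook": {"path": "/pipelines/my_notebook"}}],  # hypothetical
}

# Only wire up failure alerts when the pipeline is not in development mode.
if not is_dev:
    settings["notifications"] = [
        {
            "email_recipients": ["data-alerts@example.com"],  # hypothetical
            "alerts": ["on-update-failure", "on-flow-failure"],
        }
    ]
```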
Hi Team, I attended the Advantage Lakehouse: Fueling Innovation in the Era of Data and AI webinar. I also completed Databricks Lakehouse Fundamentals and the feedback survey, but I still have not received the Databricks voucher. Could you please look i...
1. How do we use the cloudFiles.backfillInterval option in a notebook?
2. Does any property need to be set for it?
3. Where exactly is the readStream portion of the code, or the writeStream portion, placed?
4. Do you have any sample code?
5. Where we find ...
1. Is the following code correct for specifying the .option("cloudFiles.backfillInterval", 300)?

df = spark.readStream.format("cloudFiles") \
    .option("cloudFiles.format", "csv") \
    .option("cloudFiles.schemaLocation", f"dbfs:/FileStore/xyz/back_fill_opti...
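For comparison, a sketch of a complete stream with the option set: it goes on the readStream side, and the writeStream side is an ordinary streaming write. Note that cloudFiles.backfillInterval expects an interval string such as "1 day" or "1 week", not a bare number like 300. Paths and the table name below are hypothetical:

```python
df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "csv")
    .option("cloudFiles.schemaLocation", "dbfs:/FileStore/example/schema")  # hypothetical path
    .option("cloudFiles.backfillInterval", "1 day")  # interval string, not 300
    .load("dbfs:/FileStore/example/input")  # hypothetical source directory
)

(
    df.writeStream
    .option("checkpointLocation", "dbfs:/FileStore/example/checkpoint")  # hypothetical path
    .trigger(processingTime="1 hour")
    .toTable("example_table")  # hypothetical table name
)
```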
Is there a way to set up a workflow with multiple tasks so that different tasks can share the same compute resource at the same time? I understand that an instance pool may be an option here. Wasn't sure if there were other possible options to cons...
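Two angles on this: tasks within a single job run that reference the same job_cluster_key genuinely share one cluster at the same time (as in the sketch after the workflow question above), while an instance pool only lets separate clusters reuse the same idle VMs to cut startup time. For the pool route, a hedged sketch assuming the databricks-sdk Python package, with hypothetical names and node types:

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.compute import ClusterSpec

w = WorkspaceClient()

# Create a pool of pre-warmed VMs that multiple clusters can draw from.
pool = w.instance_pools.create(
    instance_pool_name="shared-pool",  # hypothetical name
    node_type_id="Standard_DS3_v2",    # hypothetical node type
    min_idle_instances=1,
)

# A cluster spec that takes its nodes from the pool instead of fresh VMs.
spec = ClusterSpec(
    spark_version="15.4.x-scala2.12",  # hypothetical runtime version
    instance_pool_id=pool.instance_pool_id,
    num_workers=2,
)
```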
I am unable to log in to my Databricks Community account; my email is "sureshsuthar1251@gmail.com". When I try to log in, it says the account is not found.
I am testing Databricks with non-AWS S3 object storage and want to test it with Databricks on Azure and Databricks on AWS. My Databricks account is currently using Databricks on AWS, with metadata and a single-node compute running. Can the same accou...
Thank you Kaniz for posting the link. Looking at that, I believe the answer is: this is not possible in Databricks for now.
Hi team, I've had a disappointing experience during my first certification attempt and need help resolving the issue. While taking the Databricks Data Engineer Associate certification exam, every 2-3 questions I kept receiving a message that the ...
@Cert-Team I registered for the Databricks Certified Data Engineer Associate exam, completed all the biometric and other prerequisites, and launched my exam. While writing the exam, it exited twice with some technical issue, although I don...
Thank you for your support. I have given my test.
I have the below schema:

schema = StructType([
    StructField(
        name="Test",
        dataType=StringType(),
        nullable=False,
        metadata={"description": "This is to test metadata description."},
    )
])
data = [('Test1',), ('Test2',), ('Test3',)]
df = spark.createDataFrame(data, ...
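A hedged completion of the snippet above, with the imports it needs, showing that the field metadata can be read back from the resulting DataFrame's schema:

```python
from pyspark.sql.types import StringType, StructField, StructType

schema = StructType([
    StructField(
        name="Test",
        dataType=StringType(),
        nullable=False,
        metadata={"description": "This is to test metadata description."},
    )
])
df = spark.createDataFrame([("Test1",), ("Test2",), ("Test3",)], schema)

# The field metadata survives on the schema and can be read back:
print(df.schema["Test"].metadata)
# {'description': 'This is to test metadata description.'}
```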