Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

MrJava
by New Contributor III
  • 10250 Views
  • 16 replies
  • 12 kudos

How to know who started a job run?

Hi there! We have different jobs/workflows configured in our Databricks workspace running on AWS and would like to know who actually started a job run. Are they started by a user or by a service principal using curl? Currently one can only see who is t...

Latest Reply
Ayush_Arora
New Contributor II
  • 12 kudos

The system table solution works only when the job is manually triggered each time. I have a job which is triggered using the job scheduler on Databricks. So once someone resumes the trigger, the job goes into execution. After this, the audit tables d...
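For reference, a minimal sketch of the kind of system-table query being discussed, written as PySpark. The table and columns follow the documented system.access.audit schema, but the action name is an assumption (scheduled and API-triggered runs may be logged under different actions), so adjust to your workspace:

# Minimal sketch: list who triggered job runs (user or service principal),
# assuming the system.access.audit table is enabled.
who_ran = spark.sql("""
    SELECT event_time,
           user_identity.email          AS triggered_by,   -- user or service principal
           action_name,                                    -- e.g. runNow; scheduled runs may use other actions
           request_params['job_id']     AS job_id
    FROM system.access.audit
    WHERE service_name = 'jobs'
      AND action_name = 'runNow'
    ORDER BY event_time DESC
""")
display(who_ran)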

15 More Replies
DataGeek_JT
by New Contributor II
  • 2858 Views
  • 1 reply
  • 0 kudos

[SQL_CONF_NOT_FOUND] The SQL config "/Volumes/xxx...." cannot be found. Please verify that the confi

I am getting the below error when trying to stream data from an Azure Storage path to a Delta Live Table ([PATH] is the path to my files, which I have redacted here): [SQL_CONF_NOT_FOUND] The SQL config "/Volumes/[PATH]" cannot be found. Please verify tha...

Latest Reply
NandiniN
Databricks Employee
  • 0 kudos

I believe you are not setting spark.conf.set("/Volumes/[PATH]", "your_actual_path_here"), hence when you try to get the conf, it fails. In data_source_path = spark.conf.get("/Volumes/[PATH]"), "/Volumes/[PATH]" becomes the conf name, and you would not want ...
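To make that concrete, a minimal sketch of the pattern being described; the config key name pipeline.data_source_path and the volume path are placeholders, not anything from the thread:

# Store the Volume path under a named Spark conf key, then read it back by that name.
spark.conf.set("pipeline.data_source_path", "/Volumes/my_catalog/my_schema/my_volume/landing")

data_source_path = spark.conf.get("pipeline.data_source_path")

# Use the retrieved path as the streaming source, e.g. with Auto Loader.
df = (spark.readStream
          .format("cloudFiles")
          .option("cloudFiles.format", "json")
          .load(data_source_path))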

meystingray
by New Contributor II
  • 3887 Views
  • 1 reply
  • 0 kudos

Azure Databricks: Cannot create volumes or tables

If I try to create a Volume, I get this error: Failed to access cloud storage: AbfsRestOperationException exceptionTraceId=fa207c57-db1a-406e-926f-4a7ff0e4afdd. When I try to create a table, I get this error: Error creating table [RequestId=4b8fedcf-24b3-...

Latest Reply
NandiniN
Databricks Employee
  • 0 kudos

It seems like you are encountering issues with accessing cloud storage while trying to create a volume and a table in Databricks on Azure. The errors you are seeing, AbfsRestOperationException and INVALID_STATE.UC_CLOUD_STORAGE_ACCESS_FAILURE, indica...

ruoyuqian
by New Contributor II
  • 762 Views
  • 1 reply
  • 0 kudos

dbt writing parquet from Volumes to Catalog schema

I have run into a weird situation: I uploaded a few parquet files (about 10) of my sales data into the Volume in my catalog and ran dbt against it. dbt succeeded and the table was created; however, when I upload a lot more parquet files...

Latest Reply
NandiniN
Databricks Employee
  • 0 kudos

When dealing with a large number of Parquet files (about 2500 in your case), the system might be running into resource limitations or timeouts. This can happen due to the sheer volume of data being processed at once. The failure might be due to insuf...

Cami
by Contributor III
  • 1860 Views
  • 2 replies
  • 0 kudos

VIEW JSON result value in a view based on a volume

Hello guys! I have the following case: it has been decided that the JSON file will be read via the following definition (from a volume), which more or less looks like this: CREATE OR REPLACE VIEW [catalog_name].[schema_name].v_[object_name] AS SELECT r...

Latest Reply
NandiniN
Databricks Employee
  • 0 kudos

You must be getting the below error: [CONFIG_NOT_AVAILABLE] Configuration spark.sql.legacy.json.allowEmptyString.enabled is not available. That's because this config is not configurable on a warehouse, so the SQL editor won't be the best choice for this. ...
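A minimal sketch of the workaround in a notebook on an all-purpose or jobs cluster rather than the SQL editor; the volume path and view name below are placeholders, and the config name is the one from the error above:

# Session configs like this can be set on a cluster, but not on a SQL warehouse.
spark.conf.set("spark.sql.legacy.json.allowEmptyString.enabled", "true")

df = spark.read.json("/Volumes/my_catalog/my_schema/my_volume/my_file.json")
df.createOrReplaceTempView("v_my_object")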

1 More Replies
DylanStout
by New Contributor III
  • 3364 Views
  • 3 replies
  • 0 kudos

UC Volumes: writing xlsx file to volume

How to write a DataFrame to a Volume in a catalog? We tried the following code with our pandas DataFrame: dbutils.fs.put('dbfs:/Volumes/xxxx/default/input_bestanden/x test.xlsx', pandasDf.to_excel('/Volumes/xxxx/default/input_bestanden/x test.xlsx')) T...

Latest Reply
NandiniN
Databricks Employee
  • 0 kudos

I was able to upload using dbutils.fs.cp('/FileStore/excel-1.xlsx', 'dbfs:/Volumes/xxx/default/xxx/x_test.xlsx'). Maybe the space in the file name is causing an issue for you.
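A minimal sketch of the whole round trip along those lines; the paths are placeholders, and writing Excel from pandas assumes openpyxl is installed on the cluster:

import pandas as pd

# Write the Excel file to driver-local storage first, then copy it into the
# Unity Catalog volume; note the target file name has no space in it.
pandas_df = pd.DataFrame({"col_a": [1, 2, 3]})
local_path = "/tmp/x_test.xlsx"
pandas_df.to_excel(local_path, index=False)   # requires openpyxl

dbutils.fs.cp(f"file:{local_path}", "dbfs:/Volumes/xxxx/default/input_bestanden/x_test.xlsx")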

2 More Replies
Akash_Wadhankar
by New Contributor III
  • 102 Views
  • 0 replies
  • 0 kudos

Databricks cluster selection

Compute is one of the largest portions of cost in Databricks ETL, and there is no written rule for handling it. Based on experience I have put together some rules of thumb for choosing the right cluster. Please check below: https://medium.com/@infinitylearnings1201/a-compr...

Nastia
by New Contributor III
  • 2123 Views
  • 1 reply
  • 0 kudos

I am getting NoneType error when running a query from API on cluster

When I run a query on Databricks itself from a notebook, it runs fine and gives me results. But the same query, when executed from FastAPI (Python, using the databricks library), gives me "TypeError: 'NoneType' object is not iterable". I can...

Latest Reply
NandiniN
Databricks Employee
  • 0 kudos

Hi @Nastia, can you please share the entire stack trace and the query that you are running? There is currently not much detail with which I can help you understand this. But it is entirely possible that a bug is causing this, because there shoul...

ameet9257
by Contributor
  • 409 Views
  • 3 replies
  • 2 kudos

Databricks Job API: The job must have exactly one owner

Hi Team, I'm trying to set the job permissions using the Databricks Jobs API but am getting the below error: {"error_code": "INVALID_PARAMETER_VALUE", "message": "The job must have exactly one owner."} I first tried to get the job permissions using the below ...

Latest Reply
mnreddy
New Contributor II
  • 2 kudos

Hi, I have tried the same approach but it didn't work for me. I am using api/2.0 with a PUT request.
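For what it's worth, a minimal sketch of the kind of PUT request being discussed; the host, token, job_id and principals are placeholders. Since PUT replaces the whole access control list, the payload still has to contain exactly one IS_OWNER entry:

import requests

host = "https://<workspace-host>"
token = "<personal-access-token>"
job_id = "123456"

payload = {
    "access_control_list": [
        # Exactly one owner must remain in the list...
        {"user_name": "owner@example.com", "permission_level": "IS_OWNER"},
        # ...plus whatever additional grants you want to set.
        {"group_name": "data-engineers", "permission_level": "CAN_MANAGE_RUN"},
    ]
}

resp = requests.put(
    f"{host}/api/2.0/permissions/jobs/{job_id}",
    headers={"Authorization": f"Bearer {token}"},
    json=payload,
)
resp.raise_for_status()
print(resp.json())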

2 More Replies
tommyhmt
by New Contributor II
  • 416 Views
  • 1 reply
  • 0 kudos

Delta Live Table missing data

Got a very simple DLT pipeline which runs fine, but the final table "a" is missing data. I've found that after it goes through a full refresh, if I rerun just the final table, I get more records (from 1.2m to 1.4m) and the missing data comes back. When I...

Latest Reply
NSonam
New Contributor II
  • 0 kudos

To me it seems like a timing or dependency issue. The missing data could be due to intermediate tables not being properly refreshed or triggered during the full refresh. Please check whether the intermediate tables are being loaded properly before it start ...

Gilg
by Contributor II
  • 6206 Views
  • 2 replies
  • 0 kudos

Pivot in Databricks SQL

Hi Team, I have a table that has a key column (column name) and a value column (value of the column name). These values are generated dynamically and I want to pivot the table. Question 1: Is there a way we can do this without specifying all the col...

Latest Reply
NSonam
New Contributor II
  • 0 kudos

PySpark can help to list the available columns. Please find the demo snippets in the attached images (Image 1 and Image 2).
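Since the attached images are not reproduced here, a minimal sketch of the dynamic-pivot idea in PySpark; the table and column names are placeholders. The distinct key values are collected first and then passed to pivot() so the columns do not have to be hard-coded:

src = spark.table("my_catalog.my_schema.key_value_table")

# Distinct key values become the pivoted column names.
keys = [row["key_col"] for row in src.select("key_col").distinct().collect()]

pivoted = (src.groupBy("id")
              .pivot("key_col", keys)
              .agg({"value_col": "first"}))
display(pivoted)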

1 More Replies
Brianben
by New Contributor III
  • 331 Views
  • 4 replies
  • 1 kudos

Procedure for retrieving archived data from a Delta table

Hi all, I am currently researching the archive support features in Databricks: https://docs.databricks.com/en/optimizations/archive-delta.html. Let's say I have enabled archive support and configured the data to be archived after 5 years, and I also con...

Latest Reply
Brianben
New Contributor III
  • 1 kudos

@Walter_C Thank you for your reply. However, there are some parts that might need further clarification. Assume I have already set delta.timeUntilArchived to 1825 days (5 years) and configured the lifecycle policy to align with the Databricks setting...
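For context, a minimal sketch of how that table property is typically set, following the archival support docs linked above; the table name is a placeholder:

# Enable archival support so files older than the interval are treated as archived.
spark.sql("""
    ALTER TABLE my_catalog.my_schema.my_table
    SET TBLPROPERTIES ('delta.timeUntilArchived' = '1825 days')
""")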

3 More Replies
AlleyCat
by New Contributor II
  • 167 Views
  • 2 replies
  • 0 kudos

How to identify deleted runs from the Workflows > Jobs UI in "system.lakeflow"

Hi, I executed a few runs from the Workflows > Jobs UI and then deleted some of them. I am still seeing the deleted runs in "system.lakeflow.job_run_timeline". How do I know which runs are the deleted ones? Thanks

Latest Reply
Ayushi_Suthar
Databricks Employee
  • 0 kudos

Hi @AlleyCat, hope you are doing well! The jobs table includes a delete_time column that records the time when the job was deleted by the user. So to identify deleted jobs, you can run a query like the following: SELECT * FROM system.lakeflow.jobs ...
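Building on that, a rough sketch of one way to tie deleted jobs back to their runs; note the jobs table can hold multiple rows per job as its definition changes over time, so treat this as illustrative only:

deleted_job_runs = spark.sql("""
    SELECT t.job_id, t.run_id, j.name, j.delete_time
    FROM system.lakeflow.job_run_timeline AS t
    JOIN system.lakeflow.jobs AS j
      ON t.workspace_id = j.workspace_id AND t.job_id = j.job_id
    WHERE j.delete_time IS NOT NULL
""")
display(deleted_job_runs)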

1 More Replies
nskiran
by New Contributor III
  • 289 Views
  • 3 replies
  • 0 kudos

How to bring in Databricks dbacademy courseware

I have created an account on dbacademy and signed up for the Advanced Data Engineering with Databricks course. I have also subscribed to the Vocareum lab. During the demo, the tutor/trainer opened 'ADE 1.1 - Follow Along Demo - Reading from a Streaming ...

Latest Reply
BigRoux
Databricks Employee
  • 0 kudos

So, it appears that we no longer make the notebooks available with self-paced training.  They are not available for download.

2 More Replies
jiteshraut20
by New Contributor III
  • 1073 Views
  • 2 replies
  • 0 kudos

Deploying Overwatch on Databricks (AWS) with System Tables as the Data Source

Introduction: Overwatch is a powerful tool for monitoring and analyzing your Databricks environment, providing insights into resource utilization, cost management, and system performance. By leveraging system tables as the data source, you can gain a c...

Latest Reply
raghu2
New Contributor III
  • 0 kudos

Hi @jiteshraut20, thanks for your post. From my setup, validation seems to work: "Wrote 32 bytes. Validation report has been saved to dbfs:/mnt/overwatch_global/multi_ow_dep/report/validationReport". Validation report details: Total validation count: 35 ...

1 More Replies

Connect with Databricks Users in Your Area

Join a Regional User Group to connect with local Databricks users. Events will be happening in your city, and you won’t want to miss the chance to attend and share knowledge.

If there isn’t a group near you, start one and help create a community that brings people together.
