Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Cami
by Contributor III
  • 2608 Views
  • 2 replies
  • 0 kudos

View JSON result value in a view based on a volume

Hello guys! I have the following case: it has been decided that the JSON file will be read through the following view definition (from a volume), which looks more or less like this: CREATE OR REPLACE VIEW [catalog_name].[schema_name].v_[object_name] AS SELECT r...

Latest Reply
NandiniN
Databricks Employee
  • 0 kudos

You must be getting the error below: [CONFIG_NOT_AVAILABLE] Configuration spark.sql.legacy.json.allowEmptyString.enabled is not available. That's because this config is not configurable on a SQL warehouse, so the SQL editor won't be the best choice for this. ...

1 More Replies
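As a hedged sketch of the workaround implied above (view and config names taken from the thread, everything else a placeholder): on an all-purpose cluster the session config can still be set from a notebook, which is exactly what a SQL warehouse does not allow.

```python
# Hypothetical sketch: querying the JSON-backed view from a notebook attached
# to an all-purpose cluster, where session configs can be set. On a SQL
# warehouse this config is not available, which is why the SQL editor fails.
config_key = "spark.sql.legacy.json.allowEmptyString.enabled"
view_name = "catalog_name.schema_name.v_object_name"  # placeholder name

statement = f"SELECT * FROM {view_name}"
# In the notebook:
# spark.conf.set(config_key, "true")
# df = spark.sql(statement)
```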
DylanStout
by Contributor
  • 5381 Views
  • 3 replies
  • 0 kudos

UC Volumes: writing xlsx file to volume

How do we write a DataFrame to a Volume in a catalog? We tried the following code with our pandas DataFrame: dbutils.fs.put('dbfs:/Volumes/xxxx/default/input_bestanden/x test.xlsx', pandasDf.to_excel('/Volumes/xxxx/default/input_bestanden/x test.xlsx')) T...

Latest Reply
NandiniN
Databricks Employee
  • 0 kudos

I was able to upload using dbutils.fs.cp('/FileStore/excel-1.xlsx', 'dbfs:/Volumes/xxx/default/xxx/x_test.xlsx'). Maybe the space in the name is causing an issue for you.

2 More Replies
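A note on why the original call failed, with a hedged sketch (paths hypothetical): pandas' to_excel writes the file itself and returns None, so nesting it inside dbutils.fs.put hands put a None payload. Writing straight to the volume path, without spaces in the file name, is the simpler route.

```python
# to_excel returns None -- it writes the file itself -- so wrapping it in
# dbutils.fs.put passes None as the contents. Write directly to the volume
# path instead. Path below is a placeholder, with the space removed:
target = "/Volumes/xxxx/default/input_bestanden/x_test.xlsx"

# In a notebook (needs an Excel engine such as openpyxl on the cluster):
# pandasDf.to_excel(target, index=False)
```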
Akash_Wadhankar
by New Contributor III
  • 293 Views
  • 0 replies
  • 0 kudos

Databricks cluster selection

Compute is one of the largest portions of cost in Databricks ETL. There is no written rule for handling this, but based on experience I have put together some rules of thumb for choosing the right cluster. Please check below. https://medium.com/@infinitylearnings1201/a-compr...

IshaBudhiraja
by New Contributor II
  • 2365 Views
  • 4 replies
  • 0 kudos

Migration of Synapse Databricks activity executions from All-purpose cluster to New job cluster

Hi, we have been planning to migrate the Synapse Databricks activity executions from 'All-purpose cluster' to 'New job cluster' to reduce overall cost. We are using Standard_D3_v2 as the cluster node type, which has 4 CPU cores in total. The current quota ...

Latest Reply
NandiniN
Databricks Employee
  • 0 kudos

I also see a difference with Photon. Enable Photon for workloads with large data scans, joins, aggregations, and decimal computations; it provides significant performance benefits over the standard Databricks Runtime.

3 More Replies
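For a job cluster, Photon is switched on via the runtime_engine field of the cluster spec. A hedged sketch of a Jobs API new_cluster block (versions and node type are placeholders; verify against your workspace):

```python
# Hedged sketch: enabling Photon on a job cluster through the Jobs API
# cluster spec. spark_version and node_type_id below are placeholders.
new_cluster = {
    "spark_version": "15.4.x-scala2.12",
    "node_type_id": "Standard_D3_v2",
    "num_workers": 2,
    "runtime_engine": "PHOTON",  # PHOTON instead of STANDARD
}
```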
Nastia
by New Contributor III
  • 3121 Views
  • 1 reply
  • 0 kudos

I am getting NoneType error when running a query from API on cluster

When I run a query on Databricks itself from a notebook, it runs fine and gives me results. But the same query, when executed from FastAPI (Python, using the databricks library), gives me "TypeError: 'NoneType' object is not iterable". I can...

Latest Reply
NandiniN
Databricks Employee
  • 0 kudos

Hi @Nastia, can you please share the entire stack trace and the query that you are running? There is currently not much detail with which I can help you understand this. But it is entirely possible that a bug is causing this, because there shoul...

ameet9257
by Contributor
  • 1257 Views
  • 3 replies
  • 2 kudos

Databricks Job API: The job must have exactly one owner

Hi Team, I'm trying to set the job permissions using the Databricks Jobs API but am getting the error below: {"error_code": "INVALID_PARAMETER_VALUE", "message": "The job must have exactly one owner."} I first tried to get the job permissions using the below ...

Latest Reply
NR_Modugula
New Contributor II
  • 2 kudos

Hi, I have tried the same approach but it didn't work for me. I am using api/2.0 with a PUT request.

2 More Replies
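One likely cause of the error in this thread: a PUT to /api/2.0/permissions/jobs/{job_id} replaces the whole access control list, so the payload itself must contain exactly one IS_OWNER entry. A hedged sketch (user names and job id are placeholders):

```python
# Hedged sketch: the full ACL sent with PUT must include exactly one owner.
# If the payload omits IS_OWNER, the API returns
# "The job must have exactly one owner."
payload = {
    "access_control_list": [
        {"user_name": "owner@example.com", "permission_level": "IS_OWNER"},
        {"user_name": "teammate@example.com", "permission_level": "CAN_MANAGE_RUN"},
    ]
}

owners = [a for a in payload["access_control_list"]
          if a["permission_level"] == "IS_OWNER"]

# In practice (host/token/job_id are placeholders):
# requests.put(f"{host}/api/2.0/permissions/jobs/{job_id}",
#              headers={"Authorization": f"Bearer {token}"}, json=payload)
```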
Gilg
by Contributor II
  • 7474 Views
  • 2 replies
  • 0 kudos

Pivot in Databricks SQL

Hi Team, I have a table with a key column (column name) and a value column (value of the column name). These values are generated dynamically, and I want to pivot the table. Question 1: Is there a way that we can do this without specifying all the col...

Latest Reply
NSonam
New Contributor II
  • 0 kudos

PySpark can help to list the available columns. Please find the demo snippets in the attached images.

1 More Replies
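Since SQL's PIVOT clause requires an explicit IN (...) list, a common workaround is to collect the distinct keys first and build the statement dynamically. A hedged sketch (table and column names are placeholders; in a notebook the keys would come from a collect()):

```python
# Hedged sketch: gather the distinct key values, then generate the PIVOT
# statement so the columns need not be hard-coded.
# In a notebook: keys = [r.key for r in
#     spark.sql("SELECT DISTINCT key FROM t").collect()]
keys = ["colA", "colB", "colC"]  # stand-in for the collected keys

in_list = ", ".join(f"'{k}' AS {k}" for k in keys)
pivot_sql = f"SELECT * FROM t PIVOT (MAX(value) FOR key IN ({in_list}))"
# spark.sql(pivot_sql)
```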
Brianben
by New Contributor III
  • 1010 Views
  • 4 replies
  • 1 kudos

Procedure of retrieving archived data from delta table

Hi all, I am currently researching the archive support features in Databricks: https://docs.databricks.com/en/optimizations/archive-delta.html. Let's say I have enabled archive support and configured the data to be archived after 5 years, and I also con...

Latest Reply
Brianben
New Contributor III
  • 1 kudos

@Walter_C Thank you for your reply. However, there are some parts that might need further clarification. Assume I have already set delta.timeUntilArchived to 1825 days (5 years) and I have configured the lifecycle policy to align with the Databricks setting...

3 More Replies
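For reference, the property discussed above is set per table, and it only tells Databricks which files to treat as archived; the actual tiering is done by the cloud lifecycle policy, which must use the same interval. A hedged sketch (table name is a placeholder):

```python
# Hedged sketch: mark data older than 5 years as archived. The cloud-side
# lifecycle rule must be configured separately with a matching interval.
ddl = (
    "ALTER TABLE my_catalog.my_schema.my_table "
    "SET TBLPROPERTIES ('delta.timeUntilArchived' = '1825 days')"
)
# spark.sql(ddl)
```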
AlleyCat
by New Contributor II
  • 629 Views
  • 2 replies
  • 0 kudos

Identifying deleted runs from the Workflows > Jobs UI in "system.lakeflow"

Hi, I executed a few runs in the Workflows > Jobs UI and then deleted some of them. I am seeing the deleted runs in "system.lakeflow.job_run_timeline". How do I know which runs are the deleted ones? Thanks

Latest Reply
Ayushi_Suthar
Databricks Employee
  • 0 kudos

Hi @AlleyCat, hope you are doing well! The jobs table includes a delete_time column that records the time when the job was deleted by the user. So to identify deleted jobs, you can run a query like the following: SELECT * FROM system.lakeflow.jobs ...

1 More Replies
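A hedged sketch of the kind of query the reply describes, joining the run timeline back to the jobs table so runs belonging to deleted jobs surface (treat the exact column set as an assumption and verify against the system tables docs):

```python
# Hedged sketch: jobs with a non-null delete_time have been deleted; joining
# job_run_timeline to jobs isolates runs that belong to those jobs.
query = """
SELECT t.*
FROM system.lakeflow.job_run_timeline AS t
JOIN system.lakeflow.jobs AS j
  ON t.workspace_id = j.workspace_id AND t.job_id = j.job_id
WHERE j.delete_time IS NOT NULL
"""
# spark.sql(query)
```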
nskiran
by New Contributor III
  • 885 Views
  • 3 replies
  • 0 kudos

How to bring in databricks dbacademy courseware

I have created an account on dbacademy and signed up for the Advanced Data Engineering with Databricks course. I have also subscribed to the Vocareum lab. During the demo, the tutor/trainer opened 'ADE 1.1 - Follow Along Demo - Reading from a Streaming ...

Latest Reply
BigRoux
Databricks Employee
  • 0 kudos

So, it appears that we no longer make the notebooks available with self-paced training.  They are not available for download.

2 More Replies
jiteshraut20
by New Contributor III
  • 1939 Views
  • 2 replies
  • 0 kudos

Deploying Overwatch on Databricks (AWS) with System Tables as the Data Source

Introduction: Overwatch is a powerful tool for monitoring and analyzing your Databricks environment, providing insights into resource utilization, cost management, and system performance. By leveraging system tables as the data source, you can gain a c...

Latest Reply
raghu2
New Contributor III
  • 0 kudos

Hi @jiteshraut20, thanks for your post. From my setup, validation seems to work: Wrote 32 bytes. Validation report has been saved to dbfs:/mnt/overwatch_global/multi_ow_dep/report/validationReport. Validation report details: Total validation count: 35 ...

1 More Replies
johnnwanosike
by New Contributor III
  • 1071 Views
  • 6 replies
  • 0 kudos

Hive metastore federation, internal and external unable to connect

I enabled the internal hive metastore for federation using this query: CREATE CONNECTION IF NOT EXISTS internal-hive TYPE hive_metastore OPTIONS (builtin true); But I can't get a password or username to access the JDBC URL.

Latest Reply
johnnwanosike
New Contributor III
  • 0 kudos

Not really. What I want to achieve is connecting to an external hive, but I want to configure the external hive on our server to interact with the Databricks cluster in such a way that I have access to the Thrift protocol.

5 More Replies
_deepak_
by New Contributor II
  • 3228 Views
  • 4 replies
  • 0 kudos

Databricks regression test suite

Hi, I am new to Databricks and am setting up the non-prod environment. I wanted to know: is there any way I can run a regression suite so that the existing setup does not break when a feature is added, and also how can I make available ...

Latest Reply
grkseo7
New Contributor II
  • 0 kudos

Regression testing after code changes can be automated easily. Once you’ve created test cases with Pytest or Great Expectations, you can set up a CI/CD pipeline using tools like Jenkins or GitHub Actions. For a non-prod setup, Docker is great for rep...

3 More Replies
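To make the Pytest suggestion above concrete, a minimal illustrative regression test that pins down a transformation's current behaviour so a later feature addition cannot silently change it (function and data are made up for illustration):

```python
# Illustrative regression test in the pytest style: the function is a
# stand-in for any pipeline step whose behaviour should stay fixed.
def dedupe_keep_latest(rows):
    """Keep the last occurrence of each id."""
    latest = {}
    for row in rows:
        latest[row["id"]] = row
    return list(latest.values())

def test_dedupe_keep_latest():
    rows = [{"id": 1, "v": "old"}, {"id": 2, "v": "x"}, {"id": 1, "v": "new"}]
    result = dedupe_keep_latest(rows)
    assert len(result) == 2
    assert {"id": 1, "v": "new"} in result
```

Run under pytest in CI (Jenkins, GitHub Actions) on every change; the same pattern scales to DataFrame-level checks.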
hari-prasad
by Valued Contributor II
  • 644 Views
  • 3 replies
  • 1 kudos

Optimize Cluster Uptime by Avoiding Unwanted Library or Jar Installations

Whenever we discuss clusters or nodes in any service, we need to address the cluster bootstrap process. Traditionally, this involves configuring each node using a startup script (startup.sh). In this context, installing libraries in the cluster is par...

Data Engineering
cluster
job
jobs
Nodes
Latest Reply
hari-prasad
Valued Contributor II
  • 1 kudos

I'm sharing my experience here. Thank you for following up!

2 More Replies
korijn
by New Contributor II
  • 706 Views
  • 1 reply
  • 0 kudos

How to set environment (client) on notebook via API/Terraform provider?

I am deploying a job with a notebook task via the Terraform provider. I want to set the client version to 2. I do NOT need to install any dependencies. I just want to use the new client version for the serverless compute. How do I do this with the Te...

Latest Reply
Walter_C
Databricks Employee
  • 0 kudos

Unfortunately, there is no direct way to set the client version for a notebook task via the Terraform provider or the API without using the UI. The error message suggests that the %pip magic command is the recommended approach for installing dependen...

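For anyone landing here later, a hedged sketch of the job-settings shape involved (field names as I understand the Jobs API's serverless environment spec; verify against the current API and Terraform provider docs before relying on them): the notebook task references an environment whose spec pins the client version, with no dependencies.

```python
# Hedged sketch -- field names are an assumption to verify: a serverless
# notebook task bound to an environment that pins client version 2 only.
job_settings = {
    "name": "my-serverless-job",  # placeholder
    "environments": [
        {"environment_key": "default", "spec": {"client": "2"}}
    ],
    "tasks": [
        {
            "task_key": "nb",
            "notebook_task": {"notebook_path": "/Workspace/path/to/nb"},
            "environment_key": "default",
        }
    ],
}
```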
