Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Gaurav_784295
by New Contributor III
  • 3478 Views
  • 4 replies
  • 1 kudos

pyspark.sql.utils.AnalysisException: Non-time-based windows are not supported on streaming DataFrames/Datasets

pyspark.sql.utils.AnalysisException: Non-time-based windows are not supported on streaming DataFrames/Datasets. Getting this error while writing; can anyone please tell me how we can resolve it?

Latest Reply
siva-anantha
Contributor
  • 1 kudos

I share the same perspective as @preetmdata on this

3 More Replies
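For readers hitting the same error: a minimal sketch of the usual fix, assuming the stream has an event-time column (stream, table, and column names below are illustrative, not from the thread). Structured Streaming rejects row-based window specs, so aggregate over a time-based window with a watermark instead:

```
from pyspark.sql import functions as F

# Hypothetical stream and column names, for illustration only.
stream_df = spark.readStream.table("events")

agg = (stream_df
       .withWatermark("event_time", "10 minutes")                  # bound state for late data
       .groupBy(F.window("event_time", "5 minutes"), "device_id")  # time-based window: allowed on streams
       .count())
```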
nkrish
by Visitor
  • 35 Views
  • 4 replies
  • 0 kudos

Calling exe from notebook

How to call an exe (C#-based) from a Databricks notebook? #csharp #exe

Latest Reply
mukul1409
New Contributor
  • 0 kudos

Hi @nkrish, Databricks notebooks cannot run a Windows-based C# executable. Databricks compute runs on Linux and does not support executing native Windows .exe files. Because of this, a C# exe cannot be called directly from a Databricks notebo...

3 More Replies
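A minimal sketch of one workaround consistent with the reply above (not necessarily the approach the thread settled on): publish the C# project as a self-contained Linux binary, upload it, and invoke it from the notebook. The paths below are assumptions for illustration.

```
# Build the C# project once, outside Databricks, as a Linux-native binary:
#   dotnet publish -c Release -r linux-x64 --self-contained true

import subprocess

# Hypothetical path to the uploaded binary (e.g. a Unity Catalog volume).
result = subprocess.run(
    ["/Volumes/main/tools/bin/myapp", "--input", "/tmp/in.json"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```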
siva_pusarla
by New Contributor II
  • 178 Views
  • 4 replies
  • 0 kudos

workspace notebook path not recognized by dbutils.notebook.run() when running from a workflow/job

result = dbutils.notebook.run("/Workspace/YourFolder/NotebookA", timeout_seconds=600, arguments={"param1": "value1"})
print(result)
I was able to execute the above code manually from a notebook. But when I run the same notebook as a job, it fails stat...

Latest Reply
siva-anantha
Contributor
  • 0 kudos

@siva_pusarla: We use the following pattern and it works:
1) Calling notebook - constant location used by the job:
   + src/framework
     + notebook_executor.py
2) Callee notebooks - dynamic:
   + src/app/notebooks
   ...

3 More Replies
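A minimal sketch of one way to make the callee path robust in both interactive and job runs, in the spirit of the reply's constant-caller/dynamic-callee pattern (the notebook layout is illustrative): resolve the calling notebook's own path at runtime and build the callee path relative to it.

```
# Resolve the calling notebook's own workspace path at runtime.
current_path = (dbutils.notebook.entry_point.getDbutils()
                .notebook().getContext().notebookPath().get())
base_dir = current_path.rsplit("/", 1)[0]

# Run a sibling notebook; the path resolves the same way interactively
# and inside a job (note the API is dbutils.notebook.run, singular).
result = dbutils.notebook.run(f"{base_dir}/NotebookA",
                              timeout_seconds=600,
                              arguments={"param1": "value1"})
print(result)
```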
dj4
by New Contributor II
  • 218 Views
  • 4 replies
  • 2 kudos

Azure Databricks UI consuming way too much memory & laggy

This especially happens when the notebook is large with many cells. Even if I clear all the outputs, scrolling the notebook is way too laggy. When I start running the code, the memory consumption is 3-4 GB minimum even if I am not displaying any data/ta...

Latest Reply
siva-anantha
Contributor
  • 2 kudos

@dj4: Are you in a corporate proxy environment? The Databricks browser UI uses WebSockets, and performance issues sometimes come from security checks on that traffic.

3 More Replies
jeremy98
by Honored Contributor
  • 2989 Views
  • 12 replies
  • 0 kudos

Restarting an always-running cluster doesn't free the memory?

Hello community, I was working on optimising the driver memory, since there is code that is not optimised for Spark, and I was planning temporarily to restart the cluster to free up the memory. That could be a potential solution, since if the cluster i...

[Attachment: Screenshot 2025-03-04 at 14.49.44.png]
Latest Reply
siva-anantha
Contributor
  • 0 kudos

@jeremy98: Please review the cluster's event logs to understand the trend of the GC-related issues (example in the snapshot below). Typically, production jobs are executed using job clusters, and they stop as soon as the work is completed. Could you pleas...

11 More Replies
Aravind17
by New Contributor III
  • 21 Views
  • 1 reply
  • 0 kudos

Not received free voucher after completing Data Engineer Associate learning path

I have completed the Data Engineer Associate learning path, but I haven't received the free certification voucher yet. I've already sent multiple emails to the concerned support team regarding this issue, but unfortunately, I haven't received any resp...

Latest Reply
Advika
Community Manager
  • 0 kudos

Hello @Aravind17! This post appears to duplicate the one you recently posted. A response has already been provided to your recent post. I recommend continuing the discussion in that thread to keep the conversation focused and organised.

yzhang
by Contributor
  • 2242 Views
  • 6 replies
  • 2 kudos

iceberg with partitionedBy option

I am able to create a Unity Catalog Iceberg-format table:
df.writeTo(full_table_name).using("iceberg").create()
However, if I add the partitionedBy option I get an error:
df.writeTo(full_table_name).using("iceberg").partitionedBy("ingest_dat...

Latest Reply
LazyGenius
New Contributor II
  • 2 kudos

I found weird behavior here while creating a table using SQL. If you create a new table with the partition column last in the column list, it won't work, but if you add it at the beginning, it works! For example, the below query will wo...

5 More Replies
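For anyone retrying the DataFrame API path the original poster used: a minimal sketch, assuming the error comes from passing a bare string where a Column is expected (table and column names are illustrative). PySpark's DataFrameWriterV2.partitionedBy takes Column expressions:

```
from pyspark.sql import functions as F

# Hypothetical table and column names. partitionedBy on
# DataFrameWriterV2 expects Column expressions rather than bare strings.
(df.writeTo("main.default.events_iceberg")
   .using("iceberg")
   .partitionedBy(F.col("ingest_date"))
   .create())
```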
amekojc
by New Contributor II
  • 73 Views
  • 1 reply
  • 1 kudos

How to not make tab headers show when embedding dashboard

When embedding the AI/BI dashboard, is there a way to hide the tabs and instead use our own UI tabs to navigate? Currently, there are two tab headers - one in the Databricks dashboard and another tab section in our embedding webp...

Latest Reply
mukul1409
New Contributor
  • 1 kudos

Hi @amekojc, at the moment Databricks AI/BI dashboards do not support hiding or disabling the native dashboard tabs when embedding. The embedded dashboard always renders with its own tab headers, and there is no configuration or API to control tab vi...

libpekin
by New Contributor II
  • 115 Views
  • 2 replies
  • 2 kudos

Resolved! Databricks Free Edition - Accessing files in S3

Hello, I'm attempting to read/write files from S3 but got the error below. I am on the Free Edition (serverless by default). I'm using access_key and secret_key. Has anyone done this successfully? Thanks!
Directly accessing the underlying Spark driver JVM us...

Latest Reply
libpekin
New Contributor II
  • 2 kudos

Thanks @Sanjeeb2024, I was able to confirm as well.

1 More Replies
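Since serverless compute blocks Spark/Hadoop configs that touch the driver JVM, one alternative (not necessarily the approach confirmed in the thread) is to skip Spark entirely and use boto3 with the key pair. Bucket, key, and credentials below are placeholders:

```
import boto3

# Placeholder bucket, key, and credentials; in practice read the keys
# from Databricks secrets instead of hard-coding them.
s3 = boto3.client(
    "s3",
    aws_access_key_id="<ACCESS_KEY>",
    aws_secret_access_key="<SECRET_KEY>",
)
obj = s3.get_object(Bucket="my-bucket", Key="path/to/file.csv")
print(obj["Body"].read()[:200])
```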
RyanHager
by Contributor
  • 84 Views
  • 0 replies
  • 1 kudos

Liquid Clustering and S3 Performance

Are there any performance concerns when using liquid clustering with AWS S3? I believe all the Parquet files go in the same folder (prefix, in AWS S3 terms) versus folders per partition when using "partition by". And there is this note on S3 performa...

espenol
by New Contributor III
  • 27354 Views
  • 11 replies
  • 13 kudos

input_file_name() not supported in Unity Catalog

Hey, so our notebooks reading a bunch of JSON files from storage typically use input_file_name() when moving from raw to bronze, but after upgrading to Unity Catalog we get an error message:
AnalysisException: [UC_COMMAND_NOT_SUPPORTED] input_file_n...

Latest Reply
ramanpreet
New Contributor
  • 13 kudos

The reason 'input_file_name' is not supported is that this function was only available in older versions of the Databricks Runtime; it was deprecated from Databricks Runtime 13.3 LTS onwards.

10 More Replies
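The commonly documented replacement on Unity Catalog is the hidden _metadata column; a minimal sketch, with path and format illustrative:

```
from pyspark.sql import functions as F

# Hypothetical path and format. _metadata.file_path captures the source
# file per row, replacing input_file_name() on Unity Catalog.
df = (spark.read.format("json")
      .load("/Volumes/main/raw/landing/")
      .withColumn("source_file", F.col("_metadata.file_path")))
```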
mydefaultlogin
by New Contributor II
  • 860 Views
  • 2 replies
  • 0 kudos

Inconsistent PYTHONPATH, Git folders vs DAB

Hello Databricks Community, I'm encountering an issue related to Python paths when working with notebooks in Databricks. I have the following structure in my project:
my_notebooks
  - my_notebook.py
/my_package
  - __init__.py
  - hello.py
databricks.yml...

Latest Reply
kenny_hero
New Contributor
  • 0 kudos

I have a related question. I'm new to the Databricks platform and struggle with the PYTHONPATH issue the original poster raised. I understand using sys.path.append(...) is one approach for a notebook. This is acceptable for an ad-hoc interactive session, but thi...

1 More Replies
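A minimal sketch of the sys.path approach mentioned in the reply, using the poster's layout (the relative location of the repo root is an assumption):

```
import os
import sys

# Assumes the notebook lives in my_notebooks/ and the repo root (holding
# my_package/) is one level up; adjust the relative path to your layout.
repo_root = os.path.abspath("..")
if repo_root not in sys.path:
    sys.path.insert(0, repo_root)

from my_package import hello
```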
bsr
by New Contributor II
  • 180 Views
  • 2 replies
  • 3 kudos

Resolved! DBR 17.3.3 introduced unexpected DEBUG logs from ThreadMonitor – how to disable?

After upgrading from DBR 17.3.2 to DBR 17.3.3, we started seeing a flood of DEBUG logs like this in job outputs:
```
DEBUG:ThreadMonitor:Logging python thread stack frames for MainThread and py4j threads:
DEBUG:ThreadMonitor:Logging Thread-8 (run) stac...
```

Latest Reply
bsr
New Contributor II
  • 3 kudos

Thanks for the quick response!

1 More Replies
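A minimal sketch of one way to quiet these messages, assuming the Python logger name matches the DEBUG:ThreadMonitor: prefix in the output (an assumption, since the accepted fix isn't shown in the preview):

```
import logging

# Raise only the ThreadMonitor logger above DEBUG; other loggers keep
# their configured levels.
logging.getLogger("ThreadMonitor").setLevel(logging.WARNING)
```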
kALYAN5
by New Contributor
  • 151 Views
  • 4 replies
  • 3 kudos

Service Principal

Can two service principals have the same name but unique IDs?

Latest Reply
emma_s
Databricks Employee
  • 3 kudos

Hi @kALYAN5, here is an explanation of why service principals can share a name while their IDs stay unique. Names are for human readability: organizations use human-friendly names like "automation-batch-job" or "databricks-ci-cd" to make it easy for admins to re...

3 More Replies
Ligaya
by New Contributor II
  • 57351 Views
  • 7 replies
  • 2 kudos

ValueError: not enough values to unpack (expected 2, got 1)

Code:
Writer.jdbc_writer("Economy", economy, conf=CONF.MSSQL.to_dict(), modified_by=JOB_ID['Economy'])
The problem arises when I try to run the code in the specified Databricks notebook: an error of "ValueError: not enough values to unpack (expected 2, ...

Latest Reply
mukul1409
New Contributor
  • 2 kudos

The error happens because the function expects the table name to include both schema and table separated by a dot. Inside the function it splits the table name using a dot and tries to assign two values. When you pass only Economy, the split returns ...

6 More Replies
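A minimal sketch reproducing the unpack failure the reply describes (the split logic is inferred from the reply, not taken from the actual Writer source):

```
# The writer presumably does something like:
table_name = "Economy"
schema, table = table_name.split(".")   # ValueError: not enough values to unpack (expected 2, got 1)

# Passing schema.table satisfies the two-value unpack:
table_name = "dbo.Economy"
schema, table = table_name.split(".")   # schema="dbo", table="Economy"
```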
