Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Rishitha
by New Contributor III
  • 5003 Views
  • 3 replies
  • 0 kudos

Delta Live Tables streaming

I'm trying to add a monotonicallyIncreasingId() column to a streaming table and I see the following error: Failed to start stream [table_name] in either append mode or complete mode. Append mode error: Expression(s): monotonically_increasing_id() is not s...

Latest Reply
Niro
New Contributor II
  • 0 kudos

Are aggregations with row_number() combined with a SQL window function and a watermark still supported in Databricks 14.3?

2 More Replies
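As the error in this thread says, monotonically_increasing_id() is not supported on streaming tables because it is non-deterministic across micro-batches. A common workaround is a deterministic surrogate key derived from the natural-key columns, e.g. with sha2(concat_ws(...)) in Spark SQL. The snippet below is a plain-Python sketch of that hashing idea, not the Databricks API; the column values are hypothetical.

```python
import hashlib

def surrogate_key(*parts: str) -> str:
    """Derive a deterministic surrogate key by hashing the natural-key
    columns, so re-processing the same record always yields the same id
    (unlike monotonically_increasing_id, which is non-deterministic
    across streaming micro-batches)."""
    joined = "\x1f".join(parts)  # unit separator avoids accidental collisions
    return hashlib.sha256(joined.encode("utf-8")).hexdigest()

# The same input always maps to the same key:
k1 = surrogate_key("customer_42", "2024-02-06")
k2 = surrogate_key("customer_42", "2024-02-06")
assert k1 == k2
```

In Spark the equivalent would be something like `F.sha2(F.concat_ws("\x1f", "customer_id", "event_date"), 256)`, which streaming queries accept because it depends only on the row's contents.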
Brad
by Contributor II
  • 5956 Views
  • 5 replies
  • 0 kudos

Is there a way to control the cluster runtime version for DLT?

Hi team, when I create a DLT job, is there a way to control the cluster runtime version somewhere? E.g. I want to use 14.3 LTS. I tried to add `"spark_version": "14.3.x-scala2.12",` inside the cluster default label, but it does not work. Thanks

Latest Reply
Brad
Contributor II
  • 0 kudos

Thanks, got it. And the cluster has to be in shared mode. Can different DLT jobs share clusters, or while a DLT job is running, can other people use the cluster? It seems each DLT job run starts a new cluster. If it cannot be shared, why it has t...

4 More Replies
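For context on this thread: a DLT pipeline does not accept an arbitrary `spark_version` in its cluster settings; the runtime is managed by the pipeline's channel setting (CURRENT or PREVIEW) instead. A hedged sketch of the relevant part of a pipeline settings JSON (pipeline name and worker count are illustrative):

```json
{
  "name": "my_pipeline",
  "channel": "CURRENT",
  "clusters": [
    {
      "label": "default",
      "num_workers": 2
    }
  ]
}
```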
pjp94
by Contributor
  • 2034 Views
  • 1 reply
  • 0 kudos

pyspark.pandas PandasNotImplementedError

Can someone explain why the code below is throwing an error? My intuition tells me it's my Spark version (3.2.1), but I would like confirmation: d = {'key':['a','a','c','d','e','f','g','h'], 'data':[1,2,3,4,5,6,7,8]} x = ps.DataFrame(d) x[x['...

Latest Reply
shan_chandra
Databricks Employee
  • 0 kudos

@pjp94 - The error indicates the pandas-on-Spark implementation does not implement the method pd.Series.duplicated(). The next step is to use DataFrame methods such as distinct, groupBy, or dropDuplicates to resolve this.

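The workaround in the reply can be illustrated outside Spark: where ps.Series.duplicated() would flag repeats, a drop-duplicates pass achieves the same cleanup by keeping the first occurrence of each key. A plain-Python sketch of that logic (not the pyspark.pandas API):

```python
def drop_duplicates(keys):
    """Keep the first occurrence of each key, mirroring what
    DataFrame.dropDuplicates() does in Spark."""
    seen = set()
    out = []
    for k in keys:
        if k not in seen:
            seen.add(k)
            out.append(k)
    return out

keys = ['a', 'a', 'c', 'd', 'e', 'f', 'g', 'h']  # keys from the post
print(drop_duplicates(keys))  # ['a', 'c', 'd', 'e', 'f', 'g', 'h']
```

In pyspark.pandas the same result comes from `x.drop_duplicates(subset='key')`, which is implemented, unlike Series.duplicated().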
User_1611
by New Contributor
  • 2392 Views
  • 1 reply
  • 0 kudos

TimeoutException: Stream Execution thread for stream [xxxxxx] failed to stop within 15000 milliseconds

TimeoutException: Stream Execution thread for stream [id = xxx runId = xxxx] failed to stop within 15000 milliseconds (specified by spark.sql.streaming.stopTimeout). See the cause for what was being executed in the streaming query thread. I have a data...

Latest Reply
shan_chandra
Databricks Employee
  • 0 kudos

@User_1611 - could you please try the following?
  • Reduce the number of streaming queries running on the same cluster
  • Make sure your code does not try to re-trigger/start an active streaming query
  • Make sure to collect the thread dumps if this error hap...

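In addition to the suggestions in the reply, the timeout itself is configurable. The exception names spark.sql.streaming.stopTimeout; raising it above the 15000 ms in the error gives a busy query thread more time to shut down cleanly. A sketch of the cluster Spark configuration entry (60s is an illustrative value, not a recommendation):

```
spark.sql.streaming.stopTimeout 60s
```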
Shan1
by New Contributor II
  • 6365 Views
  • 5 replies
  • 0 kudos

Read large volume of parquet files

I have 50k+ parquet files in Azure Data Lake and I have a mount point as well. I need to read all the files and load them into a DataFrame. I have around 2 billion records in total, and not all the files have all the columns; the column order may di...

Latest Reply
shan_chandra
Databricks Employee
  • 0 kudos

@Shan1 - This could be because the files have columns that differ by data type, e.g. integer vs. long or boolean vs. integer. It can be resolved with schemaMerge=False. Please refer to this code: https://github.com/apache/spark/blob/418bba5ad6053449a141f3c9c31e...

4 More Replies
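On the original question of 50k+ Parquet files with differing column sets and order: Spark's mergeSchema read option unions the schemas and fills missing columns with nulls. A plain-Python sketch of that union-and-fill behavior (not the Spark API itself; the file contents are made up):

```python
def merge_records(records):
    """Union the column sets of all records (first-seen order) and
    fill missing columns with None, like Spark's mergeSchema option
    does when reading Parquet files with differing schemas."""
    columns = []
    for rec in records:
        for col in rec:
            if col not in columns:
                columns.append(col)
    return [{col: rec.get(col) for col in columns} for rec in records]

files = [
    {"id": 1, "name": "a"},           # file 1: two columns
    {"name": "b", "id": 2, "ts": 9},  # file 2: extra column, different order
]
print(merge_records(files))
# [{'id': 1, 'name': 'a', 'ts': None}, {'id': 2, 'name': 'b', 'ts': 9}]
```

In Spark the equivalent read is `spark.read.option("mergeSchema", "true").parquet(path)`; note this only reconciles missing columns and order, not conflicting data types, which is what the reply above addresses.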
Chandraw
by New Contributor III
  • 3715 Views
  • 2 replies
  • 0 kudos

Resolved! Malformed Input Exception while saving or retrieving Table

Hi everyone, I am using DBR version 13 and managed tables in a custom catalog; the location of the tables is AWS S3. Running the notebook on a single-user cluster, I am facing a MalformedInputException while saving data to tables or reading it. When I am running my noteboo...

Latest Reply
Chandraw
New Contributor III
  • 0 kudos

@Retired_mod The issue was resolved as soon as I deployed it to a multi-node dev cluster. The issue only occurs in single-user clusters. It looks like a limitation of running all updates on one node as a distributed system.

1 More Replies
BerkerKozan
by New Contributor III
  • 2972 Views
  • 2 replies
  • 1 kudos

Creating an All-Purpose Cluster in Databricks Asset Bundles

There is no resource to create an all-purpose cluster, but I need one. Does that mean I should create it via Terraform or dbx and reference it, which I'd prefer not to do?

Latest Reply
BerkerKozan
New Contributor III
  • 1 kudos

Hello @Ayushi_Suthar, thanks for the quick reply! Where can I see these requests? https://ideas.databricks.com/ideas/DB-I-9451 ?

1 More Replies
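On the original question: a bundle job can at least reference an all-purpose cluster that already exists (however it was created) via existing_cluster_id. A hedged sketch of the databricks.yml fragment (job name, cluster id, and notebook path are placeholders):

```yaml
resources:
  jobs:
    example_job:
      name: example-job
      tasks:
        - task_key: main
          existing_cluster_id: "0123-456789-abcdefgh"  # pre-created all-purpose cluster
          notebook_task:
            notebook_path: ./src/main_notebook.py
```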
Andriy
by New Contributor II
  • 8327 Views
  • 2 replies
  • 1 kudos

Get Job Run Status

Is there a way to get a child job run status and show the result within the parent notebook execution? Here is the case: I have a master notebook and several child notebooks. As a result, I want to see which notebook failed. For example, Notebook job s...

Latest Reply
BR_DatabricksAI
Contributor III
  • 1 kudos

Hello, are you also handling a return status when calling the notebook? Have a look at the following reference: Run a Databricks notebook from another notebook | Databricks on AWS

1 More Replies
anupam676
by New Contributor II
  • 4178 Views
  • 2 replies
  • 1 kudos

Resolved! How can I enable disk cache in this scenario?

I have a notebook where I read multiple tables from Delta Lake (let's say the schema is db), and after that I did some transformations (image enclosed) using all these tables, with transformations like join, filter, etc. After transformation and writin...

Latest Reply
anupam676
New Contributor II
  • 1 kudos

Thank you @shan_chandra 

1 More Replies
vroste
by New Contributor III
  • 2517 Views
  • 1 reply
  • 1 kudos

Delta live tables running count output mode?

I have a DLT with a table that I want to contain the running aggregation (for the sake of simplicity let's assume it's a count) for each value of some key column, using a session window. The input table goes back several years, and to clean up aggreg...

luisvasv
by New Contributor II
  • 21726 Views
  • 5 replies
  • 2 kudos

Init script problems | workspace location

At this moment, I'm working on removing legacy global and cluster-named init scripts, since they will be disabled for all workspaces on 01 Sept. I'm facing a strange problem moving init scripts from DBFS to the Workspace location...

Latest Reply
DE-cat
New Contributor III
  • 2 kudos

Using the new CLI v0.214, uploading the ".sh" file works fine: `databricks workspace import --overwrite --format AUTO --file init_setup /init/user/job/init_setup`

4 More Replies
Gauthy1825
by New Contributor II
  • 9777 Views
  • 9 replies
  • 3 kudos

How to write to Salesforce from Databricks using the spark salesforce library

Hi, I'm facing an issue while writing to a Salesforce sandbox from Databricks. I have installed the "spark-salesforce_2.12-1.1.4" library and my code is as follows: df_newLeads.write\ .format("com.springml.spark.salesforce")\ .option("username...

Latest Reply
addy
New Contributor III
  • 3 kudos

I made a function that used the code below and returned url, connectionProperties, sfwrite:
url = "https://login.salesforce.com/"
dom = url.split('//')[1].split('.')[0]
session_id, instance = SalesforceLogin(username=connectionProperties['name'], password...

8 More Replies
Heisenberg
by New Contributor II
  • 3261 Views
  • 2 replies
  • 1 kudos

Migrate a workspace from one AWS account to another AWS account

Hi everyone, we have a Databricks workspace in an AWS account that we need to migrate to a new AWS account. The workspace has a lot of managed tables, workflows, saved queries, and notebooks which need to be migrated, so I'm looking for an efficient approach t...

Data Engineering
AWS
Databricks Migration
migration
queries
Workflows
Latest Reply
katherine561
New Contributor II
  • 1 kudos

For a streamlined migration of your Databricks workspace from one AWS account to another, start by exporting notebook, workflow, and saved query configurations using Databricks REST API or CLI. Employ Deep Clone or Delta Sharing for managed table dat...

1 More Replies
Luke_H
by New Contributor II
  • 4199 Views
  • 2 replies
  • 2 kudos

Resolved! Variable referencing in EXECUTE IMMEDIATE

Hi all, as part of an ongoing exercise to refactor existing T-SQL code into Databricks, we've stumbled into an issue that we can't seem to overcome through Spark SQL. Currently we use dynamic SQL to loop through a number of tables, where we use parame...

Data Engineering
sql
Variables
Latest Reply
SergeRielau
Databricks Employee
  • 2 kudos

DECLARE OR REPLACE varfield_names1 STRING;
SET VAR varfield_names1 = 'field1 STRING';
DECLARE OR REPLACE varsqlstring1 STRING;
SET VAR varsqlstring1 = 'CREATE TABLE table1 (PrimaryKey STRING, Table STRING, ' || varfield_names1 || ')';
EXECUTE IMMEDI...

1 More Replies
ksamborn
by New Contributor II
  • 6215 Views
  • 2 replies
  • 0 kudos

withColumnRenamed error on Unity Catalog 14.3 LTS

Hi - we are migrating to Unity Catalog 14.3 LTS and have seen a change in behavior using withColumnRenamed. There is an error COLUMN_ALREADY_EXISTS on the join key, even though the column being renamed is a different column. The joined DataFrame do...

Data Engineering
Data Lineage
Unity Catalog
Latest Reply
Palash01
Valued Contributor
  • 0 kudos

Hey @ksamborn I can think of 2 solutions. Rename the column in df_2 before joining:
df_1_alias = df_1.alias("t1")
df_2_alias = df_2.alias("t2")
join_df = df_1_alias.join(df_2_alias, df_1_alias.key == df_2_alias.key)
rename_df = join_df.withColumnRenam...

1 More Replies
