Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

GeorgeD
by New Contributor
  • 662 Views
  • 1 reply
  • 0 kudos

Uncaught Error: Script error for jupyter-widgets/base

I have been using ipywidgets for quite a while in several notebooks in Databricks, but today things have completely stopped working with the following error: Uncaught Error: Script error for "@jupyter-widgets/base" http://requirejs.org/docs/errors.htm...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @GeorgeD, First, ensure that you're using a compatible version of JupyterLab with ipywidgets. As of ipywidgets 7.6 or newer, it should work seamlessly with JupyterLab 3.0 or newer without any additional steps. To verify if the necessary extension ...
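A minimal sketch of a version check you could run in a notebook cell (the version pin in the comment is an illustrative assumption, not a confirmed fix):

# Confirm which ipywidgets version the cluster is actually using
import ipywidgets
print(ipywidgets.__version__)

# If it is older than 7.6, reinstalling a newer release for this notebook
# session may help (restart Python afterwards):
# %pip install --upgrade "ipywidgets>=7.6"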

Edthehead
by New Contributor III
  • 1760 Views
  • 5 replies
  • 0 kudos

Incremental join transformation using Delta live tables

I'm attempting to build an incremental data processing pipeline using Delta Live Tables. The aim is to stream data from a source multiple times a day and join the data within the specific increment only. I'm using Autoloader to load the data increment...

Latest Reply
-werners-
Esteemed Contributor III
  • 0 kudos

Basically you want to do a stream-stream join. If you want to do that you need to take a few things into account (see link). DLT might do this for you, but I have never used it so I cannot confirm that. If your source tables are Delta tables, you could ind...
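For reference, a minimal PySpark sketch of a stream-stream join with watermarks on both sides (table names, keys and timestamp columns are placeholders; this does not confirm how DLT handles it):

from pyspark.sql.functions import expr

# Bound the state on both streams with watermarks
orders = (spark.readStream.table("orders_bronze")
          .withWatermark("order_ts", "2 hours")
          .alias("o"))
payments = (spark.readStream.table("payments_bronze")
            .withWatermark("payment_ts", "3 hours")
            .alias("p"))

# Join only within a bounded time range so Spark can discard old state
joined = orders.join(
    payments,
    expr("o.order_id = p.order_id AND "
         "p.payment_ts BETWEEN o.order_ts AND o.order_ts + INTERVAL 1 HOUR"))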

4 More Replies
Dean_Lovelace
by New Contributor III
  • 10340 Views
  • 10 replies
  • 2 kudos

How can I deploy workflow jobs to another databricks workspace?

I have created a number of workflows in the Databricks UI. I now need to deploy them to a different workspace. How can I do that? Code can be deployed via Git, but the job definitions are stored in the workspace only.
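One possible approach (not from this thread) is to export the job definition through the Jobs API and recreate it in the target workspace; a rough sketch where hosts, tokens and the job ID are placeholders, and cluster or permission references may still need remapping:

import requests

SRC_HOST = "https://<source-workspace>.cloud.databricks.com"
DST_HOST = "https://<target-workspace>.cloud.databricks.com"
SRC_TOKEN, DST_TOKEN = "<source-pat>", "<target-pat>"
JOB_ID = 123  # placeholder job ID in the source workspace

# Fetch the job definition from the source workspace
job = requests.get(f"{SRC_HOST}/api/2.1/jobs/get",
                   headers={"Authorization": f"Bearer {SRC_TOKEN}"},
                   params={"job_id": JOB_ID}).json()

# Recreate the job in the target workspace from the exported settings
resp = requests.post(f"{DST_HOST}/api/2.1/jobs/create",
                     headers={"Authorization": f"Bearer {DST_TOKEN}"},
                     json=job["settings"])
print(resp.json())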

Latest Reply
Radhikad
New Contributor II
  • 2 kudos

Hello everyone, I need the same help from a Databricks expert. I have created a 'Job1' job with runtime 12.2 in the 'Datbricks1' workspace. I have integrated it with an Azure repo and tried deploying to 'ENV1' using a CI/CD pipeline. It is successfully deployed in...

9 More Replies
ZacayDaushin
by New Contributor
  • 559 Views
  • 1 reply
  • 0 kudos

Spline agent use in Databricks

Spline agent: I use the Spline agent to get lineage of Databricks notebooks, and for that I put the following code in the notebook (attached), but I get the error attached: %scala import scala.util.parsing.json.JSON import za.co.absa.spline.harvester.SparkLinea...

Latest Reply
-werners-
Esteemed Contributor III
  • 0 kudos

Could be me, but I do not see an error message?

ktsoi
by New Contributor III
  • 1766 Views
  • 4 replies
  • 0 kudos

Resolved! INVALID_STATE: Storage configuration limit exceeded, only 11 storage configurations are allowed

Our team are trying to set up a new workspace (8th workspace), but failed to create the storage configurations required for the new workspace with an error of INVALID_STATE: Storage configuration limit exceeded, only 11 storage configurations are all...

Latest Reply
_Architect_
New Contributor II
  • 0 kudos

I solved the issue by simply going into Cloud Resources in the Databricks account console, navigating to "Credential Configuration" and "Storage Configuration", and deleting all the configurations that are not needed anymore (belonging to deleted workspaces). I ...
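If you prefer to script that cleanup, the account-level objects can also be listed and deleted through the Databricks Account API; a rough sketch assuming the standard storage-configuration endpoints (account ID, token, auth method and configuration ID are placeholders and depend on your account setup):

import requests

ACCOUNT_ID = "<databricks-account-id>"
TOKEN = "<account-admin-token>"
BASE = f"https://accounts.cloud.databricks.com/api/2.0/accounts/{ACCOUNT_ID}"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# List existing storage configurations to spot the unused ones
for cfg in requests.get(f"{BASE}/storage-configurations", headers=HEADERS).json():
    print(cfg.get("storage_configuration_id"), cfg.get("storage_configuration_name"))

# Delete one that belongs to a deleted workspace and is no longer needed
requests.delete(f"{BASE}/storage-configurations/<storage-configuration-id>",
                headers=HEADERS)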

3 More Replies
Arinjay
by New Contributor
  • 562 Views
  • 1 reply
  • 0 kudos

Cannot add a comment on a table via a CREATE TABLE statement

I am not able to add a comment using this CREATE TABLE statement with AS (query).

Latest Reply
feiyun0112
Contributor III
  • 0 kudos

CREATE TABLE [ IF NOT EXISTS ] table_identifier
    [ ( col_name1 col_type1 [ COMMENT col_comment1 ], ... ) ]
    USING data_source
    [ OPTIONS ( key1=val1, key2=val2, ... ) ]
    [ PARTITIONED BY ( col_name1, col_name2, ... ) ]
    [ CLUSTERED B...
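For the CTAS case specifically, a table-level comment can usually be placed before the AS clause, and column comments can be added afterwards; a sketch with placeholder names (not the exact statement from the thread):

# Table-level comment together with CREATE TABLE ... AS SELECT
spark.sql("""
    CREATE TABLE IF NOT EXISTS my_schema.orders_summary
    COMMENT 'Daily aggregate of orders'
    AS SELECT order_date, count(*) AS order_count
       FROM my_schema.orders
       GROUP BY order_date
""")

# Column comments can then be set on the created table
spark.sql("""
    ALTER TABLE my_schema.orders_summary
    ALTER COLUMN order_count COMMENT 'Number of orders per day'
""")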

Haylyon
by New Contributor II
  • 5231 Views
  • 5 replies
  • 3 kudos

Missing 'DBAcademy DLT' as a Cluster Policy when creating Delta Live Tables pipeline

I am currently in the middle of the Data Engineering Associate course on the Databricks Partner Academy. I am on module 4 - "Build Data Pipelines with Delta Live Tables", and trying to complete the lab "DE 4.1 - DLT UI Walkthrough". I have successful...

Latest Reply
Kaniz
Community Manager
  • 3 kudos

Hi @Haylyon , We haven't heard from you since the last response from @SeRo, and I was checking back to see if those suggestions helped you. Otherwise, if you have a solution, please share it with the community, as it can be helpful to others. Also, p...

4 More Replies
brian999
by New Contributor III
  • 957 Views
  • 3 replies
  • 0 kudos

Writing to Snowflake from Databricks - sqlalchemy replacement?

I am trying to migrate some complex python load processes into databricks. Our load processes currently use pandas and we're hoping to refactor into Spark soon. For now, I need to figure out how to alter our functions that get sqlalchemy connection e...

Latest Reply
shan_chandra
Esteemed Contributor
  • 0 kudos

@brian999 - the spark-snowflake connector is built into the DBR. Please refer to the article below for examples: https://docs.databricks.com/en/connect/external-systems/snowflake.html#read-and-write-data-from-snowflake Please let us know if this hel...
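A minimal sketch of the read/write pattern from that page (connection values are placeholders, and using a secret scope for the password is an assumption about your setup):

# Read from and write to Snowflake with the built-in connector
sf_options = {
    "sfUrl": "<account>.snowflakecomputing.com",
    "sfUser": "<user>",
    "sfPassword": dbutils.secrets.get("my_scope", "snowflake_password"),
    "sfDatabase": "<database>",
    "sfSchema": "<schema>",
    "sfWarehouse": "<warehouse>",
}

df = (spark.read.format("snowflake")
      .options(**sf_options)
      .option("dbtable", "<source_table>")
      .load())

(df.write.format("snowflake")
   .options(**sf_options)
   .option("dbtable", "<target_table>")
   .mode("append")
   .save())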

2 More Replies
NanthakumarYoga
by New Contributor
  • 1250 Views
  • 1 reply
  • 1 kudos

Partition in Spark

Hi Community, I need your help understanding the topics below. I have a huge transaction file (20 GB), a Parquet file partitioned by the transaction_date column. The data is evenly distributed (no skew). There are 10 days of data and we have 10 partition f...

Latest Reply
Kaniz
Community Manager
  • 1 kudos

Hi @NanthakumarYoga, Let’s delve into each of your questions about Spark and data partitioning: Data Partitioning and Parallel Processing: When you read a large Parquet file without any specific where condition (a simple read), Spark automaticall...
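A small sketch you can use to inspect this yourself (the path and date value are placeholders):

# Read the partitioned Parquet data and check how Spark splits the work
df = spark.read.parquet("/mnt/data/transactions")  # partitioned by transaction_date

print(df.rdd.getNumPartitions())  # task partitions for the full read

# Filtering on the partition column lets Spark prune to the matching folders
one_day = df.where("transaction_date = '2024-03-01'")
print(one_day.rdd.getNumPartitions())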

MarinD
by New Contributor II
  • 1301 Views
  • 1 reply
  • 1 kudos

Resolved! CI/CD Databricks Asset Bundles - DLT pipelines - unity catalog and target schema

Is it possible for the CI/CD Databricks Asset Bundles YAML file to describe the Unity Catalog and target schema as the destination needed for the DLT pipeline? Or is that just not possible today? In case this functionality is not possible today, are there any ...

Latest Reply
Kaniz
Community Manager
  • 1 kudos

Hi @MarinD , As of now, Databricks Asset Bundles do not directly support specifying the Unity Catalog and target schema as the destination for a Delta Live Tables (DLT) pipeline within the YAML configuration file. However, let’s delve into the detai...

kmodelew
by New Contributor II
  • 1060 Views
  • 2 replies
  • 1 kudos

Resolved! TaskSensor - check if a task succeeded

Hi, I would like to check whether a task within a job succeeded (even if the job is marked as failed because of one of its tasks). I need to create a dependency for tasks within other jobs. The case is that I have one job for loading all tables for one country. Re...

Latest Reply
Kaniz
Community Manager
  • 1 kudos

Hi @kmodelew, Databricks Jobs now supports task orchestration, allowing you to run multiple tasks as a directed acyclic graph (DAG). This simplifies the creation, management, and monitoring of your data and machine learning workflows. You can easily ...
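If you need to check a single task's outcome programmatically even when the overall run failed, one option is the Jobs API runs/get endpoint; a sketch with placeholder host, token, run ID and task key:

import requests

HOST = "https://<workspace>.cloud.databricks.com"
TOKEN = "<pat>"
RUN_ID = 456  # placeholder run ID of the country-load job
TASK_KEY = "load_country_tables"  # placeholder task key

run = requests.get(f"{HOST}/api/2.1/jobs/runs/get",
                   headers={"Authorization": f"Bearer {TOKEN}"},
                   params={"run_id": RUN_ID}).json()

# Map each task to its result state and look up the one we depend on
task_states = {t["task_key"]: t.get("state", {}).get("result_state")
               for t in run.get("tasks", [])}
print(task_states.get(TASK_KEY))  # e.g. "SUCCESS" even if the run as a whole failed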

1 More Replies
JoseMacedo
by New Contributor II
  • 549 Views
  • 3 replies
  • 0 kudos

How to cache on 500 billion rows

Hello! I'm using a serverless SQL cluster on Databricks and I have a dataset in a Delta table that has 500 billion rows. I'm trying to filter it down to around 7 billion rows and then cache that dataset to use it in other queries and make them run faster. When I ...

Latest Reply
-werners-
Esteemed Contributor III
  • 0 kudos

I missed the 'serverless SQL' part. CACHE is for Spark; I don't think it works for serverless SQL. Here is how caching works on DBSQL.

2 More Replies
yubin-apollo
by New Contributor II
  • 1667 Views
  • 4 replies
  • 0 kudos

COPY INTO skipRows FORMAT_OPTIONS does not work

Based on the COPY INTO documentation, it seems I can use `skipRows` to skip the first `n` rows. I am trying to load a CSV file where I need to skip the first few rows in the file. I have tried various combinations, e.g. setting the header parameter on or ...

Latest Reply
karthik-kobai
New Contributor II
  • 0 kudos

@yubin-apollo: My bad - I had the skipRows in the COPY_OPTIONS and not in the FORMAT_OPTIONS. It works, please ignore my previous comment. Thanks
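For reference, a minimal sketch of the working shape, with skipRows inside FORMAT_OPTIONS (the table name and source path are placeholders):

spark.sql("""
    COPY INTO my_schema.raw_events
    FROM 'abfss://landing@<storage-account>.dfs.core.windows.net/events/'
    FILEFORMAT = CSV
    FORMAT_OPTIONS ('header' = 'true', 'skipRows' = '2')
    COPY_OPTIONS ('mergeSchema' = 'true')
""")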

3 More Replies
DataGeek_JT
by New Contributor II
  • 1846 Views
  • 1 reply
  • 0 kudos

[SQL_CONF_NOT_FOUND] The SQL config "/Volumes/xxx...." cannot be found. Please verify that the confi

I am getting the error below when trying to stream data from an Azure Storage path to a Delta Live Table ([PATH] is the path to my files, which I have redacted here): [SQL_CONF_NOT_FOUND] The SQL config "/Volumes/[PATH]" cannot be found. Please verify tha...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @DataGeek_JT, Ensure that the path you've provided is correct. Double-check the path to make sure it points to the right location in your Azure Storage. If you've redacted the actual path, replace "[PATH]" with the actual path to your files. When w...
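For comparison, a minimal DLT sketch that reads the files from a hard-coded Volumes path with Auto Loader, rather than resolving the path from a Spark config (the path and file format are placeholders):

import dlt

@dlt.table(name="bronze_events")
def bronze_events():
    return (spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/Volumes/my_catalog/my_schema/my_volume/events/"))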

brian999
by New Contributor III
  • 468 Views
  • 1 reply
  • 0 kudos

Global ini file to reference Databricks-backed secrets (not Azure)

Is there a way to create a global ini file that will reference Databricks-backed secrets? Not from Azure; we use Databricks on AWS.

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @brian999, When working with Databricks on AWS, you can create a global initialization script that references Databricks-backed secrets. Let’s break down the steps: Create a Secret in a Databricks-Backed Scope: To create a secret, you can use...
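A rough sketch of the scope and secret creation step via the Secrets REST API (workspace URL, token, scope and key names are placeholders); the global init script would then typically consume the value through a cluster environment variable that references the secret:

import requests

HOST = "https://<workspace>.cloud.databricks.com"
TOKEN = "<pat>"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# 1. Create a Databricks-backed secret scope
requests.post(f"{HOST}/api/2.0/secrets/scopes/create",
              headers=HEADERS,
              json={"scope": "init-script-secrets"})

# 2. Store the secret value in that scope
requests.post(f"{HOST}/api/2.0/secrets/put",
              headers=HEADERS,
              json={"scope": "init-script-secrets",
                    "key": "service-password",
                    "string_value": "<secret-value>"})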
