Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

shreyassharmabh
by New Contributor II
  • 2060 Views
  • 2 replies
  • 1 kudos

How to check programmatically whether a job cluster is Unity Catalog-enabled in Databricks

Is there any way to check whether a job cluster is Unity Catalog-enabled in Databricks using Python? I tried the Jobs API https://{host_name}/api/2.0/jobs/get?job_id={job_id}, but I couldn't tell from it whether the cluster is Unity Catalog-enabled. Could anyone sugges...

Latest Reply
KarenZak
New Contributor II
  • 1 kudos

To check if a job cluster is Unity Catalog-enabled in Databricks programmatically using Python, you can use the Databricks REST API. Here's an example of how you can do it. Import the required modules: import requests. Set up the necessary variables: host...

1 More Replies
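Building on the truncated reply above, here is a minimal sketch of the approach it describes. It assumes the `data_security_mode` field exposed by the Jobs/Clusters REST APIs, where modes such as `SINGLE_USER` and `USER_ISOLATION` indicate Unity Catalog access; the host, token, and job ID are placeholders.

```python
# Cluster access modes that imply Unity Catalog access (assumption based on
# the `data_security_mode` field returned by the Jobs/Clusters APIs).
UC_MODES = {"SINGLE_USER", "USER_ISOLATION"}

def is_uc_enabled(cluster_spec: dict) -> bool:
    """Return True if a cluster spec looks Unity Catalog-enabled."""
    return cluster_spec.get("data_security_mode") in UC_MODES

def job_cluster_uc_status(host: str, token: str, job_id: int) -> dict:
    """Fetch a job's settings and map each job cluster key to its UC status."""
    import requests  # third-party; only needed for the live API call

    resp = requests.get(
        f"https://{host}/api/2.1/jobs/get",
        headers={"Authorization": f"Bearer {token}"},
        params={"job_id": job_id},
        timeout=30,
    )
    resp.raise_for_status()
    settings = resp.json().get("settings", {})
    return {
        jc["job_cluster_key"]: is_uc_enabled(jc.get("new_cluster", {}))
        for jc in settings.get("job_clusters", [])
    }
```

Older legacy access modes (or a missing field) are treated as not UC-enabled, which matches the conservative reading of the API response.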
chorongs
by New Contributor III
  • 2117 Views
  • 2 replies
  • 1 kudos

Resolved! Sequential vs. concurrent query optimization question

Preparing for Databricks certification! Is the content below correct? "If the queries are running sequentially, then scale up (increase the size of the cluster from 2X-Small to 4X-Large). If the queries are running concurrently or with many users, then scale...

Latest Reply
Anonymous
Not applicable
  • 1 kudos

Scaling in Databricks involves two aspects: vertical scaling (scale up) and horizontal scaling (scale out). Vertical Scaling (Scale Up): If your queries are running sequentially, meaning one query at a time, and you want to improve performance for a...

1 More Replies
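The scale-up vs. scale-out distinction in the reply can be sketched as two Clusters API payload fragments. The cluster ID, node types, and worker counts below are illustrative assumptions, not recommendations.

```python
# Scale UP (vertical): sequential queries benefit from bigger nodes,
# keeping the same number of workers.
scale_up = {
    "cluster_id": "1234-567890-abcde123",   # hypothetical cluster
    "node_type_id": "Standard_E16ds_v4",    # larger VM than before
    "num_workers": 4,
}

# Scale OUT (horizontal): concurrent queries / many users benefit from more
# workers (or a wider autoscale range), keeping the same node size.
scale_out = {
    "cluster_id": "1234-567890-abcde123",
    "node_type_id": "Standard_E8ds_v4",
    "autoscale": {"min_workers": 4, "max_workers": 16},
}
```

Either payload would be sent to the cluster edit endpoint; the point is which dimension changes, not the specific values.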
kellybe
by New Contributor II
  • 3891 Views
  • 6 replies
  • 0 kudos

Databricks SQL format_string in LOCATION

Hi, I'm trying to assign a location to a new database in Databricks SQL. Normally I'd do this in Python, since we read storage account names from secret scopes, but I'm attempting to do all of this from a SQL warehouse. When doing this I seem to...

Latest Reply
pcbzmani
New Contributor II
  • 0 kudos

Hello @kellybe, CREATE DATABASE IF NOT EXISTS new_database LOCATION format_string('abfss://container-name@%s.dfs.core.windows.net/', select SECRET('secret-scope', 'storage-account-name')); that is, add SELECT before SECRET.

5 More Replies
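If the SQL-only route proves brittle, the fallback the original poster mentions is to build the statement in Python. A minimal sketch, assuming hypothetical secret scope, container, and database names:

```python
def create_database_sql(database: str, storage_account: str) -> str:
    """Build a CREATE DATABASE statement with a concrete ABFSS location."""
    location = f"abfss://container-name@{storage_account}.dfs.core.windows.net/"
    return f"CREATE DATABASE IF NOT EXISTS {database} LOCATION '{location}'"

# In a notebook you would resolve the account name from a secret scope and
# run the statement (scope/key names are hypothetical):
# account = dbutils.secrets.get("secret-scope", "storage-account-name")
# spark.sql(create_database_sql("new_database", account))
```

This sidesteps the question of whether the LOCATION clause accepts a non-constant expression, because the location is a plain string by the time the statement runs.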
kurt
by New Contributor
  • 695 Views
  • 0 replies
  • 0 kudos

DLT & Publishing to Feature Store

Hi, is there an example of incorporating the Databricks Feature Store into DLT pipelines? Is this possible natively via a Python notebook that is part of the pipeline (FYI: the docs say it needs the ML Runtime)? If not completely DLT-able, what is the best current way t...

Mbinyala
by New Contributor II
  • 17149 Views
  • 2 replies
  • 1 kudos

Connecting Confluent Cloud to Databricks

Hi!! Can someone tell me how to connect Confluent Cloud to Databricks? I am new to this, so please elaborate in your answer.

Latest Reply
VaibB
Contributor
  • 1 kudos

You might want to watch this as well https://www.confluent.io/resources/online-talk/innovate-faster-and-easier-with-confluent-and-databricks-on-azure/?utm_medium=sem&utm_source=google&utm_campaign=ch.sem_br.nonbrand_tp.prs_tgt.dsa_mt.dsa_rgn.india_ln...

1 More Replies
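The usual way to connect is Structured Streaming's Kafka source pointed at the Confluent Cloud broker with SASL/PLAIN credentials. A minimal sketch; the bootstrap server, topic name, and API key/secret are placeholders you would replace with your Confluent Cloud values:

```python
def confluent_kafka_options(bootstrap: str, api_key: str, api_secret: str) -> dict:
    """Assemble Spark Kafka-source options for a Confluent Cloud cluster."""
    jaas = (
        "org.apache.kafka.common.security.plain.PlainLoginModule required "
        f'username="{api_key}" password="{api_secret}";'
    )
    return {
        "kafka.bootstrap.servers": bootstrap,
        "kafka.security.protocol": "SASL_SSL",
        "kafka.sasl.mechanism": "PLAIN",
        "kafka.sasl.jaas.config": jaas,
        "subscribe": "my-topic",          # hypothetical topic
        "startingOffsets": "earliest",
    }

# In a notebook (requires a running cluster):
# df = (spark.readStream.format("kafka")
#       .options(**confluent_kafka_options(
#           "pkc-xxxxx.region.provider.confluent.cloud:9092",
#           "API_KEY", "API_SECRET"))
#       .load())
```

In practice the API key and secret would come from a secret scope rather than literals, and the value column arrives as bytes that you cast or deserialize yourself.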
Gim
by Contributor
  • 1820 Views
  • 1 reply
  • 3 kudos

Columns with DEFAULT missing error during INSERT

I am really confused about the DEFAULT capability of Databricks SQL. I looked at the documentation for the minimum required DBR to get the capability, yet we still need to enable it as a table property? I updated my cluster's DBR from 12.2 to 13.1. Any...

Latest Reply
BriceBuso
Contributor II
  • 3 kudos

Hello @Gim, I got the same problem. I tried the instruction GENERATED ALWAYS AS (CAST(CURRENT_DATE() AS DATE)), but the code returns "Error in SQL statement: DeltaAnalysisException: current_date() cannot be used in a generated column". If you find ...

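On DBR 13.1+, column defaults generally also require a table feature to be enabled per table, which may be the missing piece in the original question. A hedged sketch of the statements involved (catalog, schema, table, and column names are hypothetical); note the reply's error is a separate limitation of generated columns, which reject non-deterministic functions like current_date():

```python
# Statements to enable and use column DEFAULTs on an existing Delta table.
statements = [
    # 1. Enable the column-defaults table feature.
    "ALTER TABLE my_catalog.my_schema.events "
    "SET TBLPROPERTIES ('delta.feature.allowColumnDefaults' = 'supported')",
    # 2. Declare a default on a column.
    "ALTER TABLE my_catalog.my_schema.events "
    "ALTER COLUMN load_date SET DEFAULT current_date()",
    # 3. Inserts that omit load_date now pick up the default.
    "INSERT INTO my_catalog.my_schema.events (id) VALUES (42)",
]

# In a notebook:
# for stmt in statements:
#     spark.sql(stmt)
```

Once the feature is set to 'supported' it cannot be unset, so it is worth trying on a scratch table first.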
erigaud
by Honored Contributor
  • 2595 Views
  • 2 replies
  • 1 kudos

Incrementally load SQL Server table

I am accessing an on-premises SQL Server table. The table is relatively small (10,000 rows), and I access it using spark.read.jdbc(url=jdbcUrl, table=query). Every day there are new records in the on-prem table that I would like to append to my bronze ...

Latest Reply
erigaud
Honored Contributor
  • 1 kudos

As I said, there is no unique identifier in the table that would allow me to do any sort of Join between my source table and my bronze table. 

1 More Replies
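Without a unique key, the common fallback is a high-watermark on a monotonically increasing column (a modified timestamp or an identity). The thread suggests even that may not exist, in which case a full reload-and-overwrite of a 10,000-row table is cheap; but assuming a hypothetical `modified_at` column, the pushdown query would look like:

```python
def incremental_query(table: str, watermark_col: str, last_value: str) -> str:
    """Build a JDBC pushdown query fetching only rows past the watermark."""
    return (
        f"(SELECT * FROM {table} "
        f"WHERE {watermark_col} > '{last_value}') AS src"
    )

# Usage in a notebook (jdbcUrl, table, and column names are hypothetical):
# last = spark.table("bronze.my_table").agg({"modified_at": "max"}).first()[0]
# df = spark.read.jdbc(url=jdbcUrl,
#                      table=incremental_query("dbo.my_table", "modified_at", str(last)))
# df.write.mode("append").saveAsTable("bronze.my_table")
```

The subquery-as-table trick makes SQL Server do the filtering, so only the new rows cross the wire.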
JLL
by New Contributor II
  • 664 Views
  • 1 reply
  • 2 kudos

Shorten query run time

We are facing long query run times; what are the recommended steps to improve performance?

Latest Reply
erigaud
Honored Contributor
  • 2 kudos

The question needs more precision: is it the cluster startup that takes a while? If yes, try serverless warehouses. Are there many queries running in parallel, and is that where you see a slowdown? Each cluster can only run 10 queries in parallel, s...

christo_M
by New Contributor
  • 1572 Views
  • 4 replies
  • 0 kudos

Cost Optimization

How can I optimize the cost of our Databricks platform? Despite some optimization actions I've taken so far, it's still difficult to lower the cost. I tried different techniques, like VACUUM or shutting down a cluster after it has been running for 30 minutes, but still d...

Latest Reply
erigaud
Honored Contributor
  • 0 kudos

Make sure you're using a cluster that is the right size for your workload. You can greatly reduce the costs by using smaller clusters.

3 More Replies
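Right-sizing and idle shutdown are both expressed directly in the cluster spec. A hedged sketch of a cost-minded Clusters API payload fragment; the name, node type, runtime version, and worker counts are illustrative, not recommendations:

```python
# Cost-minded cluster settings (Clusters API payload fragment).
cost_conscious_cluster = {
    "cluster_name": "etl-small",
    "spark_version": "13.3.x-scala2.12",     # illustrative LTS runtime
    "node_type_id": "Standard_DS3_v2",       # right-size: start small
    "autoscale": {"min_workers": 1, "max_workers": 4},  # shrink when idle
    "autotermination_minutes": 30,           # shut down idle clusters
}
```

Autotermination addresses clusters left running, while a narrow autoscale range keeps the steady-state footprint small; VACUUM only reduces storage cost, which is usually a much smaller line item than compute.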
krucial_koala
by New Contributor III
  • 3061 Views
  • 5 replies
  • 6 kudos

Extending DevOps Service Principal support?

As per the previous discussion, "How to use Databricks Repos with a service principal for CI/CD in Azure DevOps?", the recommendation was to create a DevOps PAT for the Service Principal and upload it to Databricks using the Git Credential API. The main f...

Latest Reply
Anonymous
Not applicable
  • 6 kudos

Hi @James Baxter, thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers ...

4 More Replies
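The PAT-upload step the thread refers to is a single call to the Git Credentials API, made while authenticated as the service principal. A minimal sketch; the host and tokens are placeholders, and the provider value assumes Azure DevOps Services:

```python
def git_credential_payload(devops_pat: str) -> dict:
    """Payload for registering an Azure DevOps PAT as a Git credential."""
    return {
        "git_provider": "azureDevOpsServices",
        "personal_access_token": devops_pat,
    }

# Run as the service principal (its Databricks token in `sp_token`):
# import requests
# requests.post(f"https://{host}/api/2.0/git-credentials",
#               headers={"Authorization": f"Bearer {sp_token}"},
#               json=git_credential_payload(devops_pat),
#               timeout=30)
```

The main friction the thread raises remains: the DevOps PAT expires and must be rotated, so this call typically lives in the CI pipeline rather than being run once.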
Fz1
by New Contributor III
  • 1208 Views
  • 0 replies
  • 0 kudos

DLT + Unity Catalogue Issue accessing Dataset not defined in the pipeline

I have 2 different schemas [silver and gold] under the same Unity Catalog. We are trying to incrementally ingest data into both the silver and gold layers. The silver tables were created as streaming DLT tables using dlt.create_streaming_table(....) and the ...

Data Engineering
dataset
Dataset not defined in the pipeline
dlt
schema
Unity Catalog
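A common cause of "dataset not defined in the pipeline" is referencing a table from another schema with pipeline-local syntax. In a Unity Catalog DLT pipeline, only tables defined in the same pipeline are addressed as `LIVE.<table>`; anything else needs the full three-level name. A minimal sketch under that assumption (catalog, schema, and table names are hypothetical):

```python
def table_ref(name: str, catalog: str = "main", schema: str = "silver",
              in_pipeline: bool = False) -> str:
    """Return the reference form DLT expects for a table."""
    if in_pipeline:
        return f"LIVE.{name}"          # defined in this pipeline
    return f"{catalog}.{schema}.{name}"  # defined elsewhere: fully qualify

# In the pipeline notebook:
# import dlt
# @dlt.table
# def gold_orders():
#     # silver table lives in another schema, so qualify it fully:
#     return spark.readStream.table(table_ref("orders", "main", "silver"))
```

If both layers must stay in one pipeline, an alternative is to let the pipeline own both schemas and publish each table to its target schema explicitly.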
Fz1
by New Contributor III
  • 1211 Views
  • 0 replies
  • 0 kudos

DLT with Unity Catalog pipeline not recognising tables from different schemas

I have 2 different schemas [silver and gold] under the same Unity Catalog. We are trying to incrementally ingest data into both the silver and gold layers. The silver tables were created as streaming DLT tables using dlt.create_streaming_table(....) and the ...

Data Engineering
dataset
dlt
pipelines
schema
Unity Catalog
japan
by New Contributor III
  • 2482 Views
  • 7 replies
  • 11 kudos

Resolved! DAIS 2023 announcements

What new announcement at DAIS 2023 is most interesting to you?

Latest Reply
BriceBuso
Contributor II
  • 11 kudos

Lakehouse AI, it's bringing lots of possibilities. 

6 More Replies
Hongbo
by New Contributor III
  • 8414 Views
  • 2 replies
  • 4 kudos

Resolved! Delta table with Varchar column vs string column

Databricks supports the STRING data type, but I can still create a Delta table with the VARCHAR data type. I just wonder what is different between a Delta table with STRING and a Delta table with VARCHAR: -- delta table with string CREATE TABLE persons(first_name STRIN...

Latest Reply
erigaud
Honored Contributor
  • 4 kudos

VARCHAR allows you to specify the size of the string expected in the column. This is useful when you know your column cannot exceed a set size (e.g. for a name or a code). It is equivalent to a CHECK constraint on the size. Trying to insert a value tha...

1 More Replies
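The reply likens VARCHAR(n) to a CHECK constraint on length; that semantics can be emulated in a few lines (illustrative only, with the thread's DDL alongside and hypothetical table names):

```python
def fits_varchar(value: str, n: int) -> bool:
    """True if `value` would be accepted by a VARCHAR(n) column."""
    return len(value) <= n

# DDL comparison from the thread:
# CREATE TABLE persons_s (first_name STRING);       -- unbounded
# CREATE TABLE persons_v (first_name VARCHAR(50));  -- rejects longer values
```

Storage-wise the two are the same under the hood; VARCHAR only adds the length check at write time.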