Data Engineering

Forum Posts

LeoGaller
by New Contributor
  • 99 Views
  • 1 reply
  • 0 kudos

What are the options for "spark_conf.spark.databricks.cluster.profile"?

Hey guys, I'm trying to find out what options we can pass to spark_conf.spark.databricks.cluster.profile. From looking around I know that some of the available values are singleNode and serverless, but are there others? Where is this documented?...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @LeoGaller, the spark_conf.spark.databricks.cluster.profile configuration in Databricks allows you to specify the profile for a cluster. Let's explore the available options and where you can find the documentation. Available Profiles: Sing...
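
For reference, here is a minimal sketch of where that setting lives, written as the Python dict you would pass when creating a cluster; everything except the profile key itself (runtime version, names, tags) is an illustrative assumption:

# Sketch of a single-node cluster spec; only the spark_conf profile key is the
# subject of this thread, the rest is assumed for completeness.
cluster_spec = {
    "cluster_name": "single-node-demo",        # hypothetical name
    "spark_version": "14.3.x-scala2.12",       # assumed runtime
    "num_workers": 0,                          # single node: driver only
    "spark_conf": {
        "spark.databricks.cluster.profile": "singleNode",
        "spark.master": "local[*]",            # paired with singleNode
    },
    "custom_tags": {"ResourceClass": "SingleNode"},
}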

Red1
by New Contributor III
  • 957 Views
  • 8 replies
  • 2 kudos

Autoingest not working with Unity Catalog in DLT pipeline

Hey everyone, I've built a very simple pipeline with a single DLT using auto ingest, and it works, provided I don't specify the output location. When I build the same pipeline but set UC as the output location, it fails when setting up S3 notification...

Latest Reply
Red1
New Contributor III
  • 2 kudos

Hey @Babu_Krishnan, I was! I had to reach out to my Databricks support engineer directly, and the resolution was to add "cloudfiles.awsAccessKey" and "cloudfiles.awsSecretKey" to the params as in the screenshot below (apologies, I don't know why the sc...
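
For anyone landing here later, a minimal sketch of what that resolution looks like in code, assuming an Auto Loader readStream; the path, secret scope, and key names are placeholders (note the documented option casing is cloudFiles.awsAccessKey / cloudFiles.awsSecretKey):

# Sketch only: placeholders throughout; keep credentials in a secret scope.
df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "json")             # assumed source format
      .option("cloudFiles.useNotifications", "true")   # S3 notification mode
      .option("cloudFiles.awsAccessKey", dbutils.secrets.get("my-scope", "aws-access-key"))
      .option("cloudFiles.awsSecretKey", dbutils.secrets.get("my-scope", "aws-secret-key"))
      .load("s3://my-bucket/landing/"))                # placeholder path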

7 More Replies
Mado
by Valued Contributor II
  • 7679 Views
  • 4 replies
  • 2 kudos

Resolved! Using "Select Expr" and "Stack" to Unpivot PySpark DataFrame doesn't produce expected results

I am trying to unpivot a PySpark DataFrame, but I don't get the correct results. Sample dataset: # Prepare Data data = [("Spain", 101, 201, 301), ("Taiwan", 102, 202, 302), ("Italy", 103, 203, 303), ("China", 104, 204, 304...

Latest Reply
lukeoz
New Contributor
  • 2 kudos

You can also use backticks around the column names that would otherwise be recognised as numbers. from pyspark.sql import functions as F unpivotExpr = "stack(3, '2018', `2018`, '2019', `2019`, '2020', `2020`) as (Year, CPI)" unPivotDF = df.select("C...
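
A runnable reconstruction of that suggestion, with the column names assumed from the sample data in the question:

from pyspark.sql import functions as F

data = [("Spain", 101, 201, 301),
        ("Taiwan", 102, 202, 302),
        ("Italy", 103, 203, 303),
        ("China", 104, 204, 304)]
df = spark.createDataFrame(data, ["Country", "2018", "2019", "2020"])

# Backticks make stack() treat the year columns as identifiers,
# not as integer literals.
unpivotExpr = "stack(3, '2018', `2018`, '2019', `2019`, '2020', `2020`) as (Year, CPI)"
unPivotDF = df.select("Country", F.expr(unpivotExpr))
unPivotDF.show()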

3 More Replies
amitkmaurya
by New Contributor
  • 78 Views
  • 1 reply
  • 0 kudos

How to increase executor memory in Databricks jobs

Maybe it's because I'm new to Databricks that I'm confused. Suppose I have 64 GB of worker memory in a Databricks job with at most 12 nodes... and my job is failing with "Executor Lost" due to exit code 137 (OOM, from what I found on the internet). So, to fix this I need to increase execut...

Latest Reply
raphaelblg
New Contributor III
  • 0 kudos

Hello @amitkmaurya, increasing compute resources may not always be the best strategy. To gain more insight into each executor's memory usage, check the cluster metrics tab and the Spark UI for your cluster. If one executor has a much higher memory usag...
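
If raising executor memory does turn out to be necessary, it is set through the cluster's Spark configuration rather than per job. A sketch, with values that are assumptions to adapt to your 64 GB workers, not recommendations:

# Cluster-level Spark config (cluster UI > Advanced options > Spark), as a dict.
spark_conf = {
    "spark.executor.memory": "40g",           # assumed heap size per executor
    "spark.executor.memoryOverhead": "8g",    # off-heap headroom; exit code 137
                                              # is usually the OS OOM killer
}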

Devsql
by New Contributor
  • 83 Views
  • 1 reply
  • 0 kudos

How to find that given Parquet file got imported into Bronze Layer ?

Hi Team, we recently created a new Databricks project/solution (based on the Medallion architecture) with Bronze-Silver-Gold layer tables, and built a Delta Live Tables pipeline for the Bronze layer implementation. Source files are Parqu...

Data Engineering
Azure Databricks
Bronze Job
Delta Live Table
Delta Live Table Pipeline
Latest Reply
raphaelblg
New Contributor III
  • 0 kudos

Hello @Devsql, it appears that you are creating DLT bronze tables using a standard spark.read operation. This may explain why the DLT table doesn't include "new files" during a REFRESH operation. For incremental ingestion of bronze layer data into y...
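
A minimal sketch of the incremental pattern being pointed to, assuming Auto Loader inside a DLT table definition; the table name, path, and format are placeholders:

import dlt

@dlt.table(name="bronze_events")               # hypothetical table name
def bronze_events():
    # Auto Loader checkpoints which Parquet files were already ingested,
    # so each pipeline update picks up only new files.
    return (spark.readStream
            .format("cloudFiles")
            .option("cloudFiles.format", "parquet")
            .load("s3://my-bucket/landing/"))  # placeholder path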

6502
by New Contributor III
  • 126 Views
  • 1 reply
  • 0 kudos

Delete on streaming table and starting startingVersion

I deleted some records from a streaming table by mistake, and of course the streaming job stopped working. So I restored the table to the version before the delete, and attempted to restart the job by setting startingVersion to the new vers...

Latest Reply
raphaelblg
New Contributor III
  • 0 kudos

Hello @6502, It appears you've used the `startingVersion` parameter in your streaming query, which causes the stream to begin processing data from the version prior to the DELETE operation version. However, the DELETE operation will still be processe...
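
For context, a sketch of the restart pattern under discussion, assuming a Delta source; the version number and table name are illustrative:

# After a RESTORE, the history still contains the DELETE and RESTORE commits,
# so choose startingVersion with that in mind.
df = (spark.readStream
      .format("delta")
      .option("startingVersion", "12")     # first version you want processed
      .table("my_streaming_table"))        # placeholder table name
# If the stream must tolerate the delete itself, Delta also has an
# "ignoreDeletes" option, though it only covers partition-aligned deletes.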

nilton
by New Contributor
  • 264 Views
  • 2 replies
  • 0 kudos

Query table based on table_name from information_schema

Hi, I have one table whose name changes every 60 days. The name simply increments a version number, for example: first 60 days: table_name_v1; after 60 days: table_name_v2, and so on. What I want is to query the table whose name is returned by the que...

Latest Reply
radothede
New Contributor
  • 0 kudos

The simplest way would probably be using spark.sql: %py tbl_name = 'table_v1' df = spark.sql(f'select * from {tbl_name}') display(df) From there, you can simply create a temporary view: %py df.createOrReplaceTempView('table_act') and query it using SQL st...
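
Building on that, a sketch that also resolves the current name from information_schema first; the catalog, schema, and LIKE pattern are assumptions:

# Resolve the latest versioned name, then query it through a stable alias.
latest = spark.sql("""
    SELECT table_name
    FROM my_catalog.information_schema.tables
    WHERE table_name LIKE 'table_name_v%'
    ORDER BY table_name DESC   -- lexicographic: fine until _v10 meets _v9
    LIMIT 1
""").first()["table_name"]

df = spark.sql(f"SELECT * FROM my_catalog.my_schema.{latest}")
df.createOrReplaceTempView("table_act")    # downstream SQL can use table_act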

1 More Replies
rt-slowth
by Contributor
  • 969 Views
  • 5 replies
  • 2 kudos

AutoLoader File notification mode Configuration with AWS

   from pyspark.sql import functions as F from pyspark.sql import types as T from pyspark.sql import DataFrame, Column from pyspark.sql.types import Row import dlt S3_PATH = 's3://datalake-lab/XXXXX/' S3_SCHEMA = 's3://datalake-lab/XXXXX/schemas/' ...
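
For readers comparing configurations, a minimal sketch of the file notification options that typically matter in this setup; the format and region are assumptions, and the paths reuse the placeholders from the post:

# Sketch of Auto Loader file notification mode (values are placeholders).
df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "parquet")         # assumed format
      .option("cloudFiles.useNotifications", "true")  # SQS/SNS instead of listing
      .option("cloudFiles.region", "us-east-1")       # assumed region
      .option("cloudFiles.schemaLocation", "s3://datalake-lab/XXXXX/schemas/")
      .load("s3://datalake-lab/XXXXX/"))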

Latest Reply
djhs
New Contributor III
  • 2 kudos

Was this resolved? I ran into the same issue.

4 More Replies
185369
by New Contributor II
  • 888 Views
  • 4 replies
  • 1 kudos

Resolved! DLT with UC Access Denied sqs

I am going to use the newly released DLT with UC, but it keeps getting access denied. As I keep tracking down the cause, it seems that an account ID other than my account ID or the Databricks account ID is being requested. I cannot use '*' in the principal attri...

Latest Reply
Priyag1
Honored Contributor II
  • 1 kudos

Every AWS service, including an SQS queue and all the other services in your stack that use that queue, will be configured with minimal permissions, which can lead to access issues. So make sure you get your IAM policies set up correctly before deploying to producti...

3 More Replies
israelst
by New Contributor II
  • 383 Views
  • 3 replies
  • 1 kudos

DLT can't authenticate with kinesis using instance profile

When running my notebook on personal compute with an instance profile, I am indeed able to readStream from Kinesis. But adding it as a DLT with UC, while specifying the same instance profile in the DLT pipeline settings, causes a "MissingAuthenticatio...

Data Engineering
Delta Live Tables
Unity Catalog
Latest Reply
Mathias_Peters
New Contributor II
  • 1 kudos

Hi, were you able to solve this problem? If so, what was the solution?

2 More Replies
brianbraunstein
by New Contributor
  • 153 Views
  • 1 reply
  • 0 kudos

spark.sql not supporting kwargs as documented

This documentation https://api-docs.databricks.com/python/pyspark/latest/pyspark.sql/api/pyspark.sql.SparkSession.sql.html#pyspark.sql.SparkSession.sql claims that spark.sql() should be able to take kwargs, such that the following should work: display...

Latest Reply
brianbraunstein
New Contributor
  • 0 kudos

Ok, it looks like Databricks might have broken this functionality shortly after it came out: https://community.databricks.com/t5/data-engineering/parameterized-spark-sql-not-working/m-p/57969/highlight/true#M30972
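
For reference, the documented kwargs usage the thread refers to looks like this sketch (PySpark 3.4+); whether it runs on a given Databricks runtime is exactly what the thread reports as broken:

# DataFrames passed as kwargs are substituted into the query text.
df = spark.range(10)
display(spark.sql("SELECT id FROM {df} WHERE id > 5", df=df))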

QuantumFries
by New Contributor
  • 146 Views
  • 4 replies
  • 3 kudos

Change {{job.start_time.[iso_date]}} Timezone

I am trying to schedule some jobs using workflows and leveraging dynamic variables. One caveat is that when I try to use {{job.start_time.[iso_date]}} it seems to default to UTC; is there a way to change it?

Latest Reply
artsheiko
Valued Contributor III
  • 3 kudos

Hi, all the dynamic values are in UTC (see the documentation). Maybe you can use code like the one presented below and pass the variables between tasks (see "Share information between tasks in a Databricks job")? %python from datetime import datetime, timed...
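
A sketch of how the truncated snippet above plausibly continues, converting the UTC value with the standard library; the timestamp, offset, and target timezone are assumptions:

from datetime import datetime, timedelta
from zoneinfo import ZoneInfo   # stdlib, DST-aware

start_utc = datetime.fromisoformat("2024-05-01T09:30:00+00:00")   # placeholder value
crude = start_utc - timedelta(hours=5)                       # fixed offset, ignores DST
proper = start_utc.astimezone(ZoneInfo("America/New_York"))  # assumed target timezone
print(proper.date().isoformat())   # local-time equivalent of [iso_date]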

3 More Replies
Abbe
by New Contributor II
  • 1111 Views
  • 2 replies
  • 0 kudos

Update data type of a column within a table that has a GENERATED ALWAYS AS IDENTITY-column

I want to cast the data type of a column "X" in a table "A" where column "ID" is defined as GENERATED ALWAYS AS IDENTITY. Databricks refers to overwrite to achieve this: https://docs.databricks.com/delta/update-schema.html The following operation: (spar...

Latest Reply
RajuBolla
New Contributor
  • 0 kudos

Update is not working but delete is, even when I changed to the DEFAULT property: AnalysisException: UPDATE on IDENTITY column "XXXX_ID" is not supported.

1 More Replies
curiousoctopus
by New Contributor II
  • 419 Views
  • 3 replies
  • 0 kudos

Run multiple jobs with different source code at the same time with Databricks asset bundles

Hi, I am migrating from dbx to Databricks Asset Bundles. Previously with dbx I could work on different features in separate branches and launch jobs without one job overwriting the other. Now with Databricks Asset Bundles it seems like I can...

Latest Reply
dattomo1893
New Contributor
  • 0 kudos

Any updates here? My team is migrating from dbx to DABs and we are running into the same issue. Ideally, we would like to deploy multiple, parameterized jobs from a single bundle. If this is not possible, we have to keep dbx. Thank you!

2 More Replies
Phani1
by Valued Contributor
  • 199 Views
  • 4 replies
  • 0 kudos

Parallel execution of SQL cell in Databricks Notebooks

Hi Team, please provide guidance on enabling parallel execution of SQL cells in a notebook containing multiple SQL cells. Currently, when we execute the notebook, all the SQL cells run sequentially. I would appreciate assistance on how to execute th...

Latest Reply
Ajay-Pandey
Esteemed Contributor III
  • 0 kudos

Hi @Phani1, yes, you can achieve this with Databricks Workflow jobs, where you can create tasks and define dependencies between them.
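
Workflow tasks with no dependencies between them run concurrently, which is the approach suggested above. As an in-notebook alternative (a different technique from the workflow suggestion): Spark accepts queries from multiple Python threads, so a sketch like this fans SQL statements out in parallel; the statements are placeholders:

from concurrent.futures import ThreadPoolExecutor

statements = [
    "SELECT COUNT(*) FROM sales",       # placeholder queries
    "SELECT COUNT(*) FROM customers",
]

# Each collect() submits an independent Spark job; threads let the
# scheduler overlap them instead of running cell by cell.
with ThreadPoolExecutor(max_workers=len(statements)) as pool:
    results = list(pool.map(lambda s: spark.sql(s).collect(), statements))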

3 More Replies