Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Pragan
by New Contributor
  • 4655 Views
  • 3 replies
  • 1 kudos

Resolved! Cluster doesn't support Photon with Docker Image enabled

I enabled the Photon 9.1 LTS DBR on a cluster that was already using the latest version of a Docker image. When I ran a SQL query using my cluster, I could not see any Photon engine working in my executors, which should actually have been running on the Photon engine. When...

Latest Reply
Anonymous
Not applicable
  • 1 kudos

Hello @Praganessh S, Photon is currently in Public Preview. The only way to use it is to explicitly run the Databricks-provided runtime images which contain it. Please see: https://docs.databricks.com/runtime/photon.html#databricks-clusters and https://do...

2 More Replies
SimonY
by New Contributor III
  • 4718 Views
  • 3 replies
  • 3 kudos

Resolved! Trigger.AvailableNow does not support maxOffsetsPerTrigger in Databricks runtime 10.3

Hello, I ran a Spark streaming job to ingest data from Kafka to test Trigger.AvailableNow. What environment did the job run in? 1: Databricks Runtime 10.3; 2: Azure cloud; 3: 1 driver node + 3 worker nodes (14 GB, 4 cores). val maxOffsetsPerTrigger = "500" spark.conf.set...

Latest Reply
Anonymous
Not applicable
  • 3 kudos

You'd be better off with 1 node with 12 cores than 3 nodes with 4 each. Your shuffles are going to be much better on 1 machine.

2 More Replies
fermin_vicente
by New Contributor III
  • 8807 Views
  • 7 replies
  • 4 kudos

Resolved! Can secrets be retrieved only for the scope of an init script?

Hi there, if I set any secret in an env var to be used by a cluster-scoped init script, it remains available to users attaching any notebook to the cluster and is easily extracted with a print. There's some hint in the documentation about the secret...

Latest Reply
pavan_kumar
Databricks Employee
  • 4 kudos

@Fermin Vicente good to know that this approach is working well. But please make sure that you use this approach at the end of your init script only.
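A minimal sketch of what "scrub at the end of the init script" can look like. The variable name `MY_SECRET` is a made-up example, and the logic is illustrated in Python; a real cluster-scoped init script would typically do the equivalent in bash with `unset`:

```python
import os

# Hypothetical secret-bearing variable, standing in for one injected via
# the cluster's Spark environment variables (the name is an assumption).
os.environ["MY_SECRET"] = "s3cr3t"

def use_secret_then_scrub(name: str) -> None:
    """Consume the secret, then remove it from the environment so later
    processes (e.g. notebooks attached to the cluster) cannot print it."""
    secret = os.environ[name]
    # ... use `secret` here (configure a client, write a config file, ...)
    _ = secret
    # Scrub at the very end, as the reply above suggests.
    del os.environ[name]

use_secret_then_scrub("MY_SECRET")
print("MY_SECRET" in os.environ)  # → False
```

The key point is the ordering: anything that needs the secret runs first, and the removal is the script's last step.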

6 More Replies
Hubert-Dudek
by Databricks MVP
  • 1495 Views
  • 1 replies
  • 19 kudos

Runtime 10.4 is available and is LTS. From today it is no longer beta, and it is LTS, meaning Long Term Support. So it will surely be with us for the next...

Runtime 10.4 is available and is LTS. From today it is no longer beta, and it is LTS, meaning Long Term Support. So it will surely be with us for the next 2 years. 10.4 includes some awesome features like: Auto Compaction rollbacks are now enabled by defaul...

Latest Reply
-werners-
Esteemed Contributor III
  • 19 kudos

I have the same favorite. I am curious how it works under the hood. zipWithIndex?

Hubert-Dudek
by Databricks MVP
  • 25683 Views
  • 23 replies
  • 36 kudos

Resolved! SparkFiles - strange behavior on Azure databricks (runtime 10)

When you use: from pyspark import SparkFiles; spark.sparkContext.addFile(url), it adds the file to the non-DBFS /local_disk0/, but then when you want to read the file with spark.read.json(SparkFiles.get("file_name")), it wants to read it from /dbfs/local_disk0/. I tried als...

Latest Reply
Hubert-Dudek
Databricks MVP
  • 36 kudos

I confirm that, as @Arvind Ravish said, adding file:/// solves the problem.

22 More Replies
Vegard_Stikbakk
by New Contributor II
  • 3862 Views
  • 1 replies
  • 3 kudos

Resolved! External functions on a SQL endpoint

I want to create an external function using CREATE FUNCTION (External) and expose it to users of my SQL endpoint. Although this works from a SQL notebook, if I try to use the function from a SQL endpoint, I get "User defined expression is not supporte...

Latest Reply
Hubert-Dudek
Databricks MVP
  • 3 kudos

It is a separate runtime https://docs.databricks.com/sql/release-notes/index.html#channels so it seems that it is not yet supported. There is CREATE FUNCTION documentation, but it seems that it supports only SQL syntax https://docs.databricks.com/sql...

dataguy73
by New Contributor
  • 3873 Views
  • 1 replies
  • 1 kudos

Resolved! spark properties files

I am trying to migrate a Spark job from an on-premises Hadoop cluster to Databricks on Azure. Currently, we keep many values in the properties file. When executing spark-submit we pass the parameter --properties /prop.file.txt, and inside t...

Latest Reply
-werners-
Esteemed Contributor III
  • 1 kudos

I use JSON files and .conf files which reside on the data lake or in the filestore of DBFS. Then I read those files using Python/Scala.
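The approach in the reply can be sketched as follows: keep job parameters in a JSON file on the lake or DBFS and load them at job start. The file path and config keys below are made-up examples (a temporary file stands in for a /dbfs/... path):

```python
import json
import os
import tempfile

def load_config(path: str) -> dict:
    """Read job parameters from a JSON file (e.g. on /dbfs/... or the lake)."""
    with open(path) as f:
        return json.load(f)

# Self-contained demo; the keys are hypothetical, not from the original post.
demo = {"input_path": "/mnt/raw/events", "batch_size": 500}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(demo, f)
    cfg_path = f.name

cfg = load_config(cfg_path)
print(cfg["batch_size"])  # → 500
os.remove(cfg_path)
```

This replaces the --properties flag from spark-submit: the job reads its own config file as its first step instead of receiving parsed properties from the launcher.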

BasavarajAngadi
by Contributor
  • 2546 Views
  • 1 replies
  • 4 kudos

Resolved! Hi experts: I am new to Databricks, please help me with the below. Question: How is a Delta table stored in DBFS?

If I create a Delta table, is the table stored in Parquet format in a DBFS location? And please share how the Parquet files support schema evolution if I do DML operations. As per my understanding: we read data from the data lake first into a data frame and try to...

Latest Reply
-werners-
Esteemed Contributor III
  • 4 kudos

Delta Lake is Parquet on steroids. The actual data is stored in Parquet files, but you get a bunch of extra functionalities (time traveling, ACID, optimized writes, MERGE, etc.). Check this page for lots of info. Delta Lake does support schema evolution...

BarakHav
by New Contributor II
  • 1899 Views
  • 0 replies
  • 3 kudos

Automatically Vacuuming a Delta Table on Databricks

Hi all, I've recently checked my bucket size on AWS and saw that its size doesn't make much sense. I decided to vacuum each Delta table with 2 weeks of retention. That shrunk the data from 30 TB to around 5 TB, though I was wondering, shouldn't default...
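For context on the numbers in the post: VACUUM's default retention threshold on Delta tables is 7 days (168 hours), while the poster used a 2-week window. A small sketch converting a retention window in days to the HOURS value VACUUM expects (the table name in the comment is a placeholder):

```python
# Default VACUUM retention threshold on Delta tables is 7 days (168 hours);
# the post used a 2-week window instead.

def retention_hours(days: int) -> int:
    """Convert a retention window in days to the HOURS value VACUUM takes."""
    return days * 24

print(retention_hours(7))   # default threshold → 168
print(retention_hours(14))  # the 2-week window from the post → 336

# The corresponding SQL (table name is a placeholder) would look like:
#   VACUUM my_table RETAIN 336 HOURS
```

Note that VACUUM only runs when invoked; files older than the threshold are not deleted automatically unless something schedules the command, which is consistent with the bucket growing to 30 TB.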

Gvsmao
by New Contributor III
  • 12449 Views
  • 7 replies
  • 3 kudos

Resolved! SQL Databricks - Spot VMs (Cost Optimized)

Hello! I want to ask a question please! Referring to Spot VMs with the "Cost Optimized" setting: in the case of an X-Small endpoint, which has 2 workers, if I send 10 simultaneous queries and a worker is evicted, can I get an error in any of these querie...

Latest Reply
Anonymous
Not applicable
  • 3 kudos

Thanks for the information, I will try to figure it out for more. Keep sharing such informative post.  www.mygroundbiz.com

6 More Replies
Krish123
by New Contributor
  • 2029 Views
  • 0 replies
  • 0 kudos

mount a Azure DL in Databricks

Hello Team, I am quite new to Databricks and I am learning PySpark and Databricks. I am trying to mount a DL Gen2 in Databricks; as part of that I had created an app registration, added the DL into the app registration permissions, created a secret and also adde...

Hubert-Dudek
by Databricks MVP
  • 2034 Views
  • 0 replies
  • 24 kudos

Delta time travel - recovering an unconditional delete. Recovery is a great feature of Delta. Let's check with a real example how the recovery optio...

Delta time travel - recovering an unconditional delete. Recovery is a great feature of Delta. Let's check with a real example how the recovery option works. Please watch my new YouTube video about that topic: https://www.youtube.com/watch?v=TrUT6pvFKic

shan_chandra
by Databricks Employee
  • 5597 Views
  • 1 replies
  • 2 kudos

Resolved! java.lang.ArithmeticException: Casting XXXXXXXXXXX to int causes overflow

My job started failing with the below error when inserting rows (timestamp) into a Delta table; it was working well before. java.lang.ArithmeticException: Casting XXXXXXXXXXX to int cau...

Latest Reply
shan_chandra
Databricks Employee
  • 2 kudos

This is because the Integer type represents 4-byte signed integer numbers; the range is from -2147483648 to 2147483647. Kindly use double as the data type to insert the "2147483648" value into the Delta table. In the below example, the second ...
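The overflow in the reply can be checked with a short sketch: Spark's IntegerType is a 4-byte signed integer, so any value outside [-2**31, 2**31 - 1] cannot be cast to it:

```python
# Why the cast overflows: a 4-byte signed int can only hold values in
# [-2**31, 2**31 - 1], i.e. [-2147483648, 2147483647].

INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def fits_int32(n: int) -> bool:
    """Return True if n can be stored in a 4-byte signed integer."""
    return INT32_MIN <= n <= INT32_MAX

print(fits_int32(2147483647))  # True: the largest 4-byte signed int
print(fits_int32(2147483648))  # False: one past the range, hence the
                               # ArithmeticException on the cast to int
```

In Spark such a value needs a wider column type, e.g. LongType/bigint (or double, as the reply suggests), rather than IntegerType.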

Bency
by New Contributor III
  • 3092 Views
  • 3 replies
  • 2 kudos

Invalid field schema option provided-DatabricksDeltaLakeSinkConnector

I have configured a Delta Lake Sink connector which reads from an AVRO topic and writes to the Delta lake. I have followed the docs and my config looks like below: { "name": "dev_test_delta_connector", "config": { "topics": "dl_test_avro", "inp...

Latest Reply
Bency
New Contributor III
  • 2 kudos

@Hubert Dudek, should I be configuring anything with respect to schema in the connector config? Because I did successfully stage some data from another topic of a different format (JSON_SR) into a Delta Lake table, but it's with the AVRO topic that I ge...

2 More Replies
User16826992666
by Databricks Employee
  • 4012 Views
  • 2 replies
  • 1 kudos

Resolved! As an admin of a Databricks SQL environment, can I cancel long running queries?

I don't want one long or poorly written query to block my entire SQL endpoint for everyone else. Do I have the ability to kill specific queries?

Latest Reply
DevB
New Contributor II
  • 1 kudos

Is there a way to stop the session programmatically? Like "kill session_id" or something similar in the API?

1 More Replies