Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Brad
by Contributor II
  • 1495 Views
  • 1 replies
  • 1 kudos

Colon sign operator for JSON

Hi, I have a streaming source loading data to a raw table, which has a string-type column (whose value is JSON) to hold all the data. I want to use the colon sign operator to get fields from the JSON string. Is this going to have some perf issues vs. I use a sch...

Latest Reply
Brad
Contributor II

Thanks Kaniz. Yes, I did some testing. With a schema, I read the same data source and write the parsing results to different tables. For 586K rows, the perf diff is 9 sec vs. 37 sec. For 2.3 million rows, 16 sec vs. 133 sec.

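The gap reported above is consistent with how the two approaches work: the colon sign operator extracts fields from the raw JSON string at query time, so repeated accesses can re-parse the string, while from_json with an explicit schema parses each row once into a struct. A minimal Spark SQL sketch of the two approaches, with hypothetical table, column, and field names:

```sql
-- Ad-hoc extraction: the colon sign operator works on the raw string,
-- re-parsing it for each field extracted.
SELECT raw_value:customer.id AS customer_id,
       raw_value:amount::DOUBLE AS amount
FROM raw_table;

-- Schema-first extraction: from_json parses the string once per row into
-- a struct; field access afterwards is a cheap struct lookup.
SELECT parsed.customer.id AS customer_id,
       parsed.amount      AS amount
FROM (
  SELECT from_json(raw_value,
                   'customer STRUCT<id: BIGINT>, amount DOUBLE') AS parsed
  FROM raw_table
);
```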
vemash
by New Contributor
  • 2146 Views
  • 1 replies
  • 0 kudos

How to create a Docker image to deploy and run in different environments in Databricks?

I am new to Databricks and trying to implement the below task. Task: once code merges to the main branch, the CI pipeline build is successful, and all tests pass, a Docker build should start, create a Docker image, and push it to different environments (fro...

Latest Reply
MichTalebzadeh
Valued Contributor

Hi, this is no different from building a Docker image for other environments. Let us try a simple high-level CI/CD pipeline for building Docker images and deploying them to different environments; it works in all environments, including Databricks ...

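The reply above is truncated; as a generic illustration of the flow it describes (merge to main → build → push), a hypothetical GitHub Actions job might look like the following. The CI system, registry, image name, and secret names are all placeholder assumptions, not details from the thread:

```yaml
# Hypothetical CI job: builds an image on every merge to main and pushes
# it to a registry; per-environment deployment would then pull this tag.
on:
  push:
    branches: [main]

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t registry.example.com/myapp:${{ github.sha }} .
      - name: Push image
        env:
          REGISTRY_USER: ${{ secrets.REGISTRY_USER }}
          REGISTRY_TOKEN: ${{ secrets.REGISTRY_TOKEN }}
        run: |
          echo "$REGISTRY_TOKEN" | docker login registry.example.com -u "$REGISTRY_USER" --password-stdin
          docker push registry.example.com/myapp:${{ github.sha }}
```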
Stellar
by New Contributor II
  • 1766 Views
  • 0 replies
  • 0 kudos

DLT DataPlane Error

Hi everyone, I am trying to build the pipeline, but when I run it I receive an error: DataPlaneException: Failed to start the DLT service on the cluster. Please check the driver logs for more details or contact Databricks support. This is from the driver ...

Surya0
by New Contributor III
  • 5391 Views
  • 3 replies
  • 0 kudos

Resolved! Unit hive-metastore.service not found

Hi everyone, I've encountered an issue while trying to make use of the hive-metastore capability in Databricks to create a new database and table for our latest use case. The specific command I used was "create database if not exists newDB". However, ...

Latest Reply
rakeshprasad1
New Contributor III

@Surya0: I am facing the same issue. The stack trace is: Could not connect to address=(host=consolidated-northeuropec2-prod-metastore-2.mysql.database.azure.com)(port=3306)(type=master) : Socket fail to connect to host:consolidated-northeuropec2-prod-metast...

alexgv12
by New Contributor III
  • 1402 Views
  • 1 replies
  • 0 kudos

How to deploy SQL functions in a pool

We have some function definitions which we need to have available for our BI tools, e.g. CREATE FUNCTION CREATEDATE(year INT, month INT, day INT) RETURNS DATE RETURN make_date(year, month, day); How can we always have this function definition in our ...

Latest Reply
alexgv12
New Contributor III

Looking at some alternatives with other Databricks components, I think that a CI/CD process should be created where the function can be created through the Databricks API: https://docs.databricks.com/api/workspace/functions/create https://community.databr...

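An alternative to an API-driven CI/CD step is to persist the UDF in Unity Catalog under a three-level name, so it survives cluster restarts and is visible to BI tools on any warehouse with access to that schema. A sketch, where `main.shared` is a hypothetical catalog.schema pair:

```sql
-- Persisted (not session-scoped) SQL UDF: stored in the metastore rather
-- than in the session, so it does not need to be re-created per cluster.
CREATE FUNCTION IF NOT EXISTS main.shared.CREATEDATE(year INT, month INT, day INT)
  RETURNS DATE
  RETURN make_date(year, month, day);

-- Callers then reference it by its qualified name:
SELECT main.shared.CREATEDATE(2024, 5, 17);
```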
dbal
by New Contributor III
  • 5393 Views
  • 2 replies
  • 0 kudos

Resolved! Spark job task fails with "java.lang.NoClassDefFoundError: org/apache/spark/SparkContext$"

Hi. I am trying to run a Spark job in Databricks (Azure) using the JAR type. I can't figure out why the job fails to run by not finding the SparkContext. Databricks Runtime: 14.3 LTS (includes Apache Spark 3.5.0, Scala 2.12). Error message: java.lang.NoCl...

Latest Reply
dbal
New Contributor III

Update 2: I found the reason in the documentation. This is documented under "Access Mode", and it is a limitation of the Shared access mode. Link: https://learn.microsoft.com/en-us/azure/databricks/compute/access-mode-limitations#spark-api-limitations...

Tam
by New Contributor III
  • 1759 Views
  • 1 replies
  • 0 kudos

TABLE_REDIRECTION_ERROR in AWS Athena After Databricks Upgrade to 14.3 LTS

I have a Databricks pipeline set up to create Delta tables on AWS S3, using Glue Catalog as the metastore. I was able to query the Delta table via Athena successfully. However, after upgrading the Databricks cluster from 13.3 LTS to 14.3 LTS, I began enc...

Coders
by New Contributor II
  • 2691 Views
  • 1 replies
  • 0 kudos

How to perform a deep clone for data migration from one data lake to another?

 I'm attempting to migrate data from Azure Data Lake to S3 using deep clone. The data in the source Data Lake is stored in Parquet format and partitioned. I've tried to follow the documentation from Databricks, which suggests that I need to register ...

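For reference, once the source Parquet data is registered as a (Delta) table, the deep clone itself is a single statement. The table names and S3 path below are placeholders:

```sql
-- DEEP CLONE copies the data files to the target location (unlike a
-- shallow clone, which copies only metadata), so the result is
-- self-contained in S3.
CREATE TABLE IF NOT EXISTS target_db.events_clone
  DEEP CLONE source_db.events
  LOCATION 's3://my-target-bucket/events_clone';
```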
chakradhar545
by New Contributor
  • 1007 Views
  • 0 replies
  • 0 kudos

DatabricksThrottledException Error

Hi, our scheduled job runs into the below error once in a while and fails. Any leads or thoughts on why we run into this occasionally and how to fix it? shaded.databricks.org.apache.hadoop.fs.s3a.DatabricksThrottledException: Instantiate s...

Poonam17
by New Contributor II
  • 1182 Views
  • 1 replies
  • 2 kudos

Not able to deploy a cluster in Databricks Community Edition

Hello team, I am not able to launch a Databricks cluster in Community Edition; it is getting terminated automatically. Can someone please help here? Regards, Poonam

Latest Reply
kakalouk
New Contributor II

I face the exact same problem. The message I get is this: "Bootstrap Timeout: Node daemon ping timeout in 780000 ms for instance i-062042a9d4be8725e @ 10.172.197.194. Please check network connectivity between the data plane and the control plane."

yatharth
by New Contributor III
  • 1173 Views
  • 1 replies
  • 0 kudos

LZO codec not working for graviton instances

Hi Databricks, I have a job where I am saving my data in JSON format, LZO compressed, which requires the library lzo-codec. On shifting to Graviton instances, I noticed that the same job started throwing an exception: Caused by: java.lang.RuntimeException: nati...

Latest Reply
yatharth
New Contributor III

For more context, please use the following code to replicate the error: # Create a Python list containing JSON objects json_data = [ { "id": 1, "name": "John", "age": 25 }, { "id": 2, "name": "Jane", "...

Serhii
by Contributor
  • 10174 Views
  • 7 replies
  • 4 kudos

Resolved! Saving complete notebooks to GitHub from Databricks repos.

When saving a notebook to a GitHub repo, it is stripped to Python source code. Is it possible to save it in the ipynb format?

Latest Reply
GlennStrycker
New Contributor III

When I save+commit+push my .ipynb file to my linked git repo, I noticed that only the cell inputs are saved, not the output.  This differs from the .ipynb file I get when I choose "File / Export / iPython Notebook".  Is there a way to save the cell o...

GlennStrycker
by New Contributor III
  • 3157 Views
  • 1 replies
  • 0 kudos

Resolved! Saving ipynb notebooks to git does not include output cells -- differs from export

When I save+commit+push my .ipynb file to my linked git repo, I noticed that only the cell inputs are saved, not the output.  This differs from the .ipynb file I get when I choose "File / Export / iPython Notebook".  Is there a way to save the cell o...

Latest Reply
GlennStrycker
New Contributor III

I may have figured this out.  You need to allow output in the settings, which will add a .databricks file to your repo, then you'll need to edit the options on your notebook and/or edit the .databricks file to allow all outputs.

YS1
by Contributor
  • 3323 Views
  • 1 replies
  • 0 kudos

ModuleNotFoundError: No module named 'pulp'

Hello, I'm encountering an issue while running a notebook that utilizes the Pulp library. The library is installed in the first cell of the notebook. Occasionally, I encounter the following error: org.apache.spark.SparkException: Job aborted due to s...

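If the install in the first cell is a notebook-scoped `%pip` install, one pattern worth checking, sketched below, is restarting the Python process right after the install so subsequent cells and tasks see the library. This is a generic sketch of that cell, not a confirmed fix for this particular error:

```
%pip install pulp
dbutils.library.restartPython()
```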
