Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

laurencewells
by New Contributor III
  • 4310 Views
  • 3 replies
  • 1 kudos

Resolved! Log4J Custom Filter Not Working

Hi all, hoping you can help. I am looking to set up a custom logging process that captures application ETL logs and streaming logs. I have set up multiple custom logging appenders using the guide here: https://kb.databricks.com/clusters/overwrite-log4...
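For reference, a minimal sketch of emitting application ETL log lines through the driver's log4j from a notebook, so that a custom appender configured per that guide could capture them. This assumes a Databricks notebook where `spark` is already defined; the logger name "etl.custom" is a hypothetical example, not something from the thread.

```python
# Minimal sketch: route application ETL log lines through the driver's log4j
# so a custom appender (configured per the KB guide) can capture them.
# Assumes a Databricks notebook where `spark` is defined; the logger name
# "etl.custom" is hypothetical.
log4j = spark._jvm.org.apache.log4j
etl_logger = log4j.LogManager.getLogger("etl.custom")

etl_logger.info("ETL batch started")
etl_logger.warn("Row count lower than expected")
```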

Latest Reply
Anonymous
Not applicable
  • 1 kudos

Hey there @Laurence Wells, hope you are doing great. Does @Kaniz Fatma's response answer your question? If yes, would you be happy to mark it as best so that other members can find the solution more quickly? Thanks!

2 More Replies
lizou
by Contributor II
  • 5796 Views
  • 1 replies
  • 1 kudos

Never use the float data type

select float('92233464567.33') returns 92,233,466,000. The expected result would be around 92,233,464,567.xx; therefore, the float data type should be avoided. Using double or decimal works as expected. But I see the float data type is widely used, assuming most num...

Latest Reply
Prabakar
Databricks Employee
  • 1 kudos

Float is an approximate-number data type, which means that not all values in the data type range can be represented exactly. Decimal/Numeric is a fixed-precision data type, which means that all the values in the data type range can be represented exactly w...
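The precision difference described in this thread is easy to reproduce in a notebook (assuming a `spark` session is available):

```python
# float is approximate (32-bit binary), so the literal below loses digits;
# double keeps more significand bits, and decimal(20,2) stores it exactly.
spark.sql("""
  SELECT CAST('92233464567.33' AS FLOAT)          AS as_float,
         CAST('92233464567.33' AS DOUBLE)         AS as_double,
         CAST('92233464567.33' AS DECIMAL(20, 2)) AS as_decimal
""").show(truncate=False)
```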

Krish-685291
by New Contributor III
  • 3046 Views
  • 6 replies
  • 2 kudos

Can I merge a Delta Lake table into an RDBMS table directly? What is the preferred way in Databricks?

Hi, I am dealing with updating master data. I'll do the UPSERT operations on the Delta Lake table. But after my UPSERT is complete, I'd like to update the master data on the RDBMS table as well. Is there any support from Databricks to perform this operation...
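There is no built-in cross-system MERGE, so a common workaround (sketched below under assumptions) is to push the refreshed Delta rows to a staging table in the RDBMS over JDBC and run the final upsert on the database side. All connection values, table names, and credentials below are hypothetical placeholders.

```python
# Hedged sketch: after the Delta-side upsert, copy the result to an RDBMS
# staging table over JDBC; the actual MERGE into the target table then runs
# inside the RDBMS (e.g. via a stored procedure). Placeholders throughout.
master_df = spark.read.table("master_data")  # the Delta table just upserted

(master_df.write
    .format("jdbc")
    .option("url", "jdbc:postgresql://dbhost:5432/masterdb")
    .option("dbtable", "staging.master_data")
    .option("user", "etl_user")
    .option("password", "***")
    .mode("overwrite")
    .save())
```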

Latest Reply
-werners-
Esteemed Contributor III
  • 2 kudos

I get your point and concerns. If there are plans in that direction, it will have to be a joint effort of Databricks + the DB vendor.

5 More Replies
BenBauer
by New Contributor III
  • 1046 Views
  • 0 replies
  • 4 kudos

How to prevent creation of __apply_changes_* tables during the DLT create_target_table process

Hey, we are using DLT along with SCD Type 1 via the create_target_table function. It does not actually create the table as defined, but rather a view. However, on top of the expected table we see system-generated tables, e.g. __apply_changes_*. Is there a w...
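For context, a minimal sketch of the SCD Type 1 setup being described, using the older create_target_table API from the thread (later Databricks releases renamed it). The __apply_changes_* objects are the internal backing tables DLT generates for apply_changes targets; table and column names below are hypothetical.

```python
# Hedged sketch of the DLT SCD Type 1 pattern from the question.
import dlt
from pyspark.sql.functions import col

dlt.create_target_table(name="customers")  # exposed as a view over the
                                           # internal __apply_changes_* table

dlt.apply_changes(
    target="customers",
    source="customers_cdc",
    keys=["customer_id"],
    sequence_by=col("event_ts"),
    stored_as_scd_type=1,
)
```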

naveenmamidala
by New Contributor II
  • 23693 Views
  • 1 replies
  • 1 kudos

Error: ConnectionError: HTTPSConnectionPool(host='https', port=443): Max retries exceeded with url: /api/2.0/workspace/list?path=%2F (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))

Error: ConnectionError: HTTPSConnectionPool(host='https', port=443): Max retries exceeded with url: /api/2.0/workspace/list?path=%2F (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001CAF52B4640>: Failed to establis...

Latest Reply
Sajith
New Contributor II
  • 1 kudos

Set the HTTPS proxy server in the CLI and it started working without any error: set HTTPS_PROXY=http://username:password@{proxy host}:{port}
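The same fix applied from Python rather than the Windows shell, in case it helps anyone scripting the CLI; host, port, and credentials are placeholders.

```python
# Set the proxy environment variable before invoking the Databricks CLI.
import os
import subprocess

os.environ["HTTPS_PROXY"] = "http://username:password@proxyhost:8080"
subprocess.run(["databricks", "workspace", "list", "/"], check=True)
```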

rakeshdey
by New Contributor II
  • 2354 Views
  • 0 replies
  • 1 kudos

Why is providing a list of filenames to spark.read.csv([file1, file2, file3]) much faster than providing a directory with a wildcard, spark.read.csv("/path/*")?

I have a huge number of small files in S3, and I was going through a few blogs where people say that providing a list of files, like spark.read.csv([file1, file2, file3]), is faster than giving a directory with a wildcard. Reason: Spark actually does fi...
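A small sketch contrasting the two call styles from the question: with an explicit list, Spark already knows the files, whereas a glob forces it to enumerate the S3 prefix first (listing is slow on object stores with many small files). Paths are placeholders.

```python
files = [
    "s3://bucket/data/file1.csv",
    "s3://bucket/data/file2.csv",
    "s3://bucket/data/file3.csv",
]
df_list = spark.read.csv(files, header=True)                 # listing known upfront
df_glob = spark.read.csv("s3://bucket/data/*", header=True)  # must list the prefix
```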

Sunny
by New Contributor III
  • 1491 Views
  • 1 replies
  • 0 kudos

Update task status from external application

I have a workflow with a task that is dependent on an external application's execution (not residing in Databricks). After the external application finishes, how do I update the status of the task to complete? Currently, the Jobs API doesn't support status updat...
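Since the Jobs API exposes no endpoint for marking a task complete from outside, one common workaround (sketched below under assumptions) is to make the dependent task a notebook that polls the external system and only exits once it reports done. The status URL and JSON shape are hypothetical.

```python
# Hedged sketch: a gate task that blocks until the external app finishes,
# so downstream tasks in the workflow start only afterwards.
import time
import requests

while True:
    status = requests.get("https://external-app.example.com/status").json()
    if status.get("state") == "FINISHED":
        break
    time.sleep(60)
```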

Latest Reply
Sunny
New Contributor III
  • 0 kudos

Any inputs on this one, please?

GC-James
by Contributor II
  • 13922 Views
  • 8 replies
  • 10 kudos

RserveException: eval failed

Sometimes when I am running R code in a Databricks notebook I am given this error. The cell I am running fails, and my whole R 'session' seems to get screwed up. For example, my stored variables disappear and I have to re-load my packages, etc. It is ...

[Attachment: rserve_error]
Latest Reply
data_warrior
New Contributor III
  • 10 kudos

The error file is attached here.

7 More Replies
Braxx
by Contributor II
  • 5996 Views
  • 2 replies
  • 1 kudos

Resolved! delta table storage

I couldn't find it clearly explained anywhere, so I hope somebody here can shed some light on it. A few questions: 1) Where are Delta tables stored? The docs say: "Delta Lake uses versioned Parquet files to store your data in your cloud storage." So where exactly i...
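One way to answer question 1 directly in a notebook: DESCRIBE DETAIL reports the cloud-storage path that holds the table's Parquet files and its _delta_log directory. The table name below is a placeholder.

```python
spark.sql("DESCRIBE DETAIL my_delta_table") \
     .select("location", "format") \
     .show(truncate=False)
```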

Latest Reply
Braxx
Contributor II
  • 1 kudos

Thanks, very helpful.

1 More Replies
sebg
by New Contributor II
  • 2952 Views
  • 0 replies
  • 1 kudos

Using (Python) import on Azure Databricks

Hello, my team is currently working on Azure Databricks with a mid-sized repo. When we wish to import PySpark functions and classes from other notebooks, we currently use %run <relpath>, which is less than ideal. I would like to replicate the functionalit...
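One commonly used workaround, sketched under assumptions: keep the shared code as regular .py files in the repo checkout, extend sys.path to it, and use normal imports instead of %run. The repo path and module names below are hypothetical.

```python
import sys

# Point Python at the repo checkout so plain imports resolve.
sys.path.append("/Workspace/Repos/my-user/my-repo")
from mypackage.transforms import clean_events  # hypothetical module

df = clean_events(spark.read.table("raw_events"))
```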

MariusC
by New Contributor III
  • 10996 Views
  • 5 replies
  • 5 kudos

Resolved! Power BI with Databricks SQL Endpoint

Hello, we are trying to load a Delta table from an Azure Data Lake Storage container into Power BI using the Databricks SQL Endpoint. We configured the SQL Workspace data to have access to the ADLS Delta table and created a view; we are able to query t...

[Attachments: error screenshot, sample_error, sample_query, ADLS_delta_query]
Latest Reply
Atanu
Databricks Employee
  • 5 kudos

@Marius Condescu Could you please include the Spark config below and try:
spark.hadoop.fs.azure.account.oauth.provider.type.ariaprime.dfs.core.windows.net org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider
spark.hadoop.fs.azure.account.auth.typ...
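A hedged reconstruction of that (truncated) config as session-level settings, for readers of this digest: "ariaprime" is the storage account name from the thread, and the service-principal values are placeholders.

```python
# OAuth client-credentials access to ADLS Gen2 via ABFS; placeholders in <>.
account = "ariaprime.dfs.core.windows.net"
spark.conf.set(f"fs.azure.account.auth.type.{account}", "OAuth")
spark.conf.set(f"fs.azure.account.oauth.provider.type.{account}",
               "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
spark.conf.set(f"fs.azure.account.oauth2.client.id.{account}", "<application-id>")
spark.conf.set(f"fs.azure.account.oauth2.client.secret.{account}", "<client-secret>")
spark.conf.set(f"fs.azure.account.oauth2.client.endpoint.{account}",
               "https://login.microsoftonline.com/<tenant-id>/oauth2/token")
```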

4 More Replies
Anonymous
by Not applicable
  • 1521 Views
  • 0 replies
  • 0 kudos

Data Visualized | Lego style

Something fun for your Friday! If you are a visual person like me, you may like this image that was recently shared in our internal Databricks Slack instance. Who else ♡s Legos? If you have seen data all 6 ways with Databricks, give this a ♡!!!

Raymond_Garcia
by Contributor II
  • 1481 Views
  • 0 replies
  • 1 kudos

EsHadoopNoNodesLeftException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[elasticsearch_server:80]]

Hi, I wondered if some of you have had this issue before and how it can be solved. In a Databricks Job, we have a UBQ with a Painless script for ES. These are the options. Staging and prod have the same configuration, but staging is failing with the ...
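For reference, a minimal sketch of the elasticsearch-hadoop read options typically involved in this error; the host, port, and index are placeholders taken from the error message, and es.nodes.wan.only is often relevant when only a proxy/load-balancer address is reachable from the cluster.

```python
# Hedged sketch of reading from ES via the elasticsearch-hadoop data source.
df = (spark.read
      .format("org.elasticsearch.spark.sql")
      .option("es.nodes", "elasticsearch_server")
      .option("es.port", "80")
      .option("es.nodes.wan.only", "true")  # don't discover internal data nodes
      .load("my-index"))
```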

