Data Engineering

Forum Posts

emanuelsh
by New Contributor

Schema Evolution from Kafka Source

Hi, I have a Spark streaming process that reads data from a Kafka topic into Azure DL. This is how I implement the MERGE capability into the Delta table. In addition, on the same topic I have another streaming process that simply writes data to DL. In Kafka ...

  • 599 Views
  • 0 replies
  • 0 kudos
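Editor's note: for readers looking for the MERGE pattern this post refers to, a minimal sketch follows, assuming a foreachBatch upsert from Kafka into a Delta table with schema evolution enabled. The broker, topic, paths, payload schema, and the `id` join key are illustrative assumptions, not the poster's actual code.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import LongType, StringType, StructField, StructType
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

# Illustrative payload schema and business key; not the poster's actual topic layout.
payload_schema = StructType([
    StructField("id", LongType()),
    StructField("value", StringType()),
])

# Let MERGE evolve the target schema when the source gains new columns.
spark.conf.set("spark.databricks.delta.schema.autoMerge.enabled", "true")

def upsert_to_delta(batch_df, batch_id):
    # MERGE each micro-batch into the target Delta table on the business key.
    target = DeltaTable.forPath(spark, "/mnt/lake/events")
    (target.alias("t")
           .merge(batch_df.alias("s"), "t.id = s.id")
           .whenMatchedUpdateAll()
           .whenNotMatchedInsertAll()
           .execute())

(spark.readStream.format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
      .option("subscribe", "events")                        # placeholder topic
      .load()
      .select(from_json(col("value").cast("string"), payload_schema).alias("payload"))
      .select("payload.*")
      .writeStream
      .foreachBatch(upsert_to_delta)
      .option("checkpointLocation", "/mnt/lake/_checkpoints/events_merge")
      .start())
```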
katedb
by New Contributor

Clusters do not start - bootstrap timeout

Hello, Whenever I try to start any of the already existing clusters, I get a Bootstrap timeout error. In the logs, there are the following messages: [Bootstrap Event] Can reach databricks-update-oregon.s3.us-west-2.amazonaws.com: [FAILED] [ 257.556698] audit: k...

Data Engineering
bootstrap
compute
  • 792 Views
  • 1 reply
  • 0 kudos
Latest Reply
User16752239289
Valued Contributor

The error message indicates the EC2 instance cannot access databricks-update-oregon.s3.us-west-2.amazonaws.com. Do you have an S3 endpoint set up, or can traffic route to databricks-update-oregon.s3.us-west-2.amazonaws.com?

  • 0 kudos
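Editor's note: as a rough way to test the routing question in the reply, a small probe like the one below can confirm whether TCP 443 to the bucket endpoint is reachable. It would need to run from a machine or existing cluster in the same VPC/subnet, since the failing cluster never finishes bootstrapping; this is an illustrative sketch, not a Databricks-provided diagnostic.

```python
import socket

host = "databricks-update-oregon.s3.us-west-2.amazonaws.com"
try:
    # Attempt a plain TCP handshake on port 443 to see whether traffic can route to the bucket.
    socket.create_connection((host, 443), timeout=5).close()
    print(f"TCP 443 to {host} is reachable")
except OSError as exc:
    print(f"Cannot reach {host}: {exc}")
```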
NWIEFInance
by New Contributor

Connect to EXCEL

I have a hard time connecting my existing Excel file to source data from Databricks and need help

  • 756 Views
  • 1 reply
  • 0 kudos
Latest Reply
User16539034020
Contributor II

Hi, thanks for contacting Databricks Support. We don't support direct Excel-Databricks connectivity. However, Databricks can be accessed through ODBC and JDBC interfaces, and we can leverage these with Excel's Power Query functionality for indirect...

  • 0 kudos
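Editor's note: one way to exercise the ODBC route the reply describes, before wiring it into Excel's Power Query, is to query through the Databricks (Simba Spark) ODBC driver from Python. The DSN name "Databricks" below is an assumption for illustration and must match a DSN configured against your workspace.

```python
import pyodbc

# Connect through a locally configured Databricks ODBC DSN (name is a placeholder).
conn = pyodbc.connect("DSN=Databricks", autocommit=True)
cursor = conn.cursor()
cursor.execute("SELECT current_catalog(), current_schema()")
print(cursor.fetchone())
conn.close()
```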
matanper
by New Contributor III

Custom docker image fails to initialize

I'm trying to use a custom Docker image for my job. This is my Dockerfile: FROM databricksruntime/standard:12.2-LTS COPY . . RUN /databricks/python3/bin/pip install -U pip RUN /databricks/python3/bin/pip install -r requirements.txt USER root. My job ...

  • 2137 Views
  • 5 replies
  • 1 kudos
Latest Reply
Debayan
Esteemed Contributor III

Hi, I think disabling iptables would be better in this case. Could you please try the command below and confirm? $ sudo iptables -S

  • 1 kudos
4 More Replies
Łukasz
by New Contributor III

Resolved! Dense rank possible bug

I have a case of deduplicating a data source over a specific business key using the dense_rank function. Currently the data source does not have any duplicates, so the function should return 1 in all cases. The issue is that dense_rank does not return prop...

  • 2308 Views
  • 6 replies
  • 5 kudos
Latest Reply
saipujari_spark
Valued Contributor

Hey @Łukasz, thanks for reporting. As I see it, Spark 3.4.0 introduced an improvement that looks to be the cause of this issue. Improvement: https://issues.apache.org/jira/browse/SPARK-37099 Similar bug: https://issues.apache.org/jira/browse/SPARK-44448 This...

  • 5 kudos
5 More Replies
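Editor's note: for context, the deduplication pattern under discussion typically looks like the sketch below; the column names (business_key, updated_at) and toy data are illustrative assumptions. On runtimes carrying the Spark 3.4.0 regression referenced in the reply, filtering on a dense_rank value like this appears to be the shape of query that can return wrong results.

```python
from pyspark.sql import SparkSession, Window
from pyspark.sql.functions import col, dense_rank

spark = SparkSession.builder.getOrCreate()

# Toy source with one business key appearing twice; columns are illustrative stand-ins.
df = spark.createDataFrame(
    [("k1", "2023-07-01", "old"), ("k1", "2023-07-02", "new"), ("k2", "2023-07-02", "only")],
    ["business_key", "updated_at", "payload"],
)

# Rank rows per key by recency and keep only the latest one.
w = Window.partitionBy("business_key").orderBy(col("updated_at").desc())
deduped = (
    df.withColumn("rnk", dense_rank().over(w))
      .filter(col("rnk") == 1)
      .drop("rnk")
)
deduped.show()
```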
415963
by New Contributor II

Not able to catch structured streaming exception

I would like to catch and handle an exception in a structured streaming job. The Databricks notebook still displays the exception, regardless of added exception handling (see attached screenshot). I guess that the exception is displayed by the cell outp...

  • 1662 Views
  • 3 replies
  • 2 kudos
Latest Reply
Debayan
Esteemed Contributor III

Hi, I understand, could you please also provide the last line of the error after scrolling down in the notebook cell? 

  • 2 kudos
2 More Replies
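Editor's note: the pattern the thread is circling around is that a streaming failure only becomes catchable in the driver once the query is awaited; a minimal sketch, assuming a toy rate source and in-memory sink purely for illustration:

```python
from pyspark.sql import SparkSession
from pyspark.sql.utils import StreamingQueryException

spark = SparkSession.builder.getOrCreate()

# Toy stream: a rate source written to an in-memory sink.
query = (
    spark.readStream.format("rate").option("rowsPerSecond", 1).load()
         .writeStream.format("memory").queryName("rate_demo")
         .start()
)

try:
    # Exceptions raised inside the stream surface here, not at start().
    # The 60-second timeout keeps this toy example from blocking forever;
    # a real job would usually await indefinitely.
    query.awaitTermination(60)
except StreamingQueryException as exc:
    # Handle or log instead of letting the notebook cell print the raw failure.
    print(f"Stream failed: {exc}")
```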
Retko
by Contributor

Running Command is often stuck on "Running Command..."

Hi, when running a command, it often gets stuck and the message below it says: "Running Command..." What can I do about it besides restarting the cluster? I also tried reattaching and clearing state, but no help there. Thanks

  • 4281 Views
  • 4 replies
  • 2 kudos
Latest Reply
Debayan
Esteemed Contributor III

Hi, do you see this while running a command in the notebook? Please tag @Debayan with your next comment, which will notify me. Thanks!

  • 2 kudos
3 More Replies
DennisB
by New Contributor III

Resolved! Better Worker Node Core Utilisation

Hi everyone, Hoping someone can help me with this problem. I have an embarrassingly parallel workload, which I'm parallelising over 4 worker nodes (of type Standard_F4, so 4 cores each). Each workload is single-threaded, so I believe that only one cor...

  • 2021 Views
  • 4 replies
  • 2 kudos
Latest Reply
DennisB
New Contributor III

So I managed to get the 1-core-per-executor setup working successfully. The bit that wasn't working was spark.executor.memory -- this was too high, but lowering it so that the sum of the executors' memory was ~90% of the worker node's memory allowed it to w...

  • 2 kudos
3 More Replies
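Editor's note: for reference, the kind of cluster Spark config the resolution describes might look like the sketch below. The exact values assume Standard_F4 workers (4 cores, roughly 8 GB RAM each) and are illustrative, not a recommendation; on Databricks they would go into the cluster's Spark config rather than being set at runtime.

```python
# Illustrative Spark config for running four single-core executors per 4-core worker node.
spark_conf = {
    "spark.executor.cores": "1",       # one core per executor, so one task slot each
    "spark.executor.memory": "1700m",  # 4 x 1700m stays under ~90% of the node's usable memory
}
```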
MadrasSenpai
by New Contributor II

How to install cmdstanpy in dbx cluster

I have built an HMC model using CmdStan. On my local machine, I install CmdStan with the following approach: import cmdstanpy; cmdstanpy.install_cmdstan() But in Databricks I need to reinstall it every time I train a new model, from the noteb...

  • 903 Views
  • 3 replies
  • 2 kudos
Latest Reply
Anonymous
Not applicable

Hi @Rajamannar Aanjaram Krishnamoorthy​ Great to meet you, and thanks for your question! Let's see if your peers in the community have an answer to your question. Thanks.

  • 2 kudos
2 More Replies
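Editor's note: one possible way to avoid reinstalling CmdStan on every cluster start, sketched under the assumption that a DBFS path (here /dbfs/tools/cmdstan) is acceptable as a persistent install location. install_cmdstan(dir=...) and set_cmdstan_path(...) are standard cmdstanpy calls, but the path, version-picking logic, and compile performance from DBFS are assumptions to verify.

```python
import os
import cmdstanpy

CMDSTAN_DIR = "/dbfs/tools/cmdstan"  # assumed persistent location outside the cluster's ephemeral disk

if not os.path.isdir(CMDSTAN_DIR) or not os.listdir(CMDSTAN_DIR):
    os.makedirs(CMDSTAN_DIR, exist_ok=True)
    cmdstanpy.install_cmdstan(dir=CMDSTAN_DIR)  # one-time download and build

# On later cluster starts, point cmdstanpy at the newest CmdStan build already on DBFS.
latest = sorted(d for d in os.listdir(CMDSTAN_DIR) if d.startswith("cmdstan-"))[-1]
cmdstanpy.set_cmdstan_path(os.path.join(CMDSTAN_DIR, latest))
```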
sarguido
by New Contributor II

Delta Live Tables: bulk import of historical data?

Hello! I'm very new to working with Delta Live Tables and I'm having some issues. I'm trying to import a large amount of historical data into DLT. However letting the DLT pipeline run forever doesn't work with the database we're trying to import from...

  • 1471 Views
  • 4 replies
  • 2 kudos
Latest Reply
Anonymous
Not applicable

Hi @Sarah Guido Thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers y...

  • 2 kudos
3 More Replies
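Editor's note: as one illustrative approach to the backfill question (not an official pattern): stage the historical extract as files once, then define it as a batch-read table in the pipeline, so DLT does not have to stream from the source database indefinitely. The path and format below are assumptions.

```python
import dlt

@dlt.table(
    name="orders_history_bronze",
    comment="One-time backfill of historical data staged to cloud storage",
)
def orders_history_bronze():
    # Batch read of the staged extract; `spark` is provided in DLT pipeline notebooks.
    return spark.read.format("parquet").load("/mnt/staging/orders_history/")
```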
NWIEFInance
by New Contributor

Connect to EXCEL

I have a hard time connecting to Excel; any help connecting Databricks to Excel?

  • 449 Views
  • 1 reply
  • 2 kudos
Latest Reply
Kaniz
Community Manager

Hi @NWIEFInance, This article describes using the Databricks ODBC driver to connect Databricks to Microsoft Excel. After establishing the connection, you can access the data in Databricks from Excel. You can also use Excel to analyze the data further...

  • 2 kudos
Priyag1
by Honored Contributor II

Query parameters in dashboards

Query parameters in dashboards: Queries can optionally leverage parameters or static values. When a visualization based on a parameterized query is added to a dashboard, the visualization can either be configured to use a: Widget parameter. Widget paramet...

  • 957 Views
  • 2 replies
  • 11 kudos
Latest Reply
Natalie_NL
New Contributor II

Hi, I built a dashboard with dashboard parameters; it works pretty easily! The advantage of dashboard parameters is that you do not have to set a default (it can be: all). This is convenient when you need to filter on values that change every time the q...

  • 11 kudos
1 More Replies
The_raj
by New Contributor

Error while reading file <file path>. [DEFAULT_FILE_NOT_FOUND]

Hi, I have a workflow with 5 notebooks in it. One of the notebooks is failing with the error below. I have tried refreshing the table but am still facing the same issue. When I try to run the notebook manually, it works fine. Can someone plea...

  • 2449 Views
  • 1 reply
  • 2 kudos
Latest Reply
Kaniz
Community Manager

Hi @The_raj ,  The error message you are encountering indicates a failure during the execution of a Spark job on Databricks. Specifically, it seems that Task 736 in Stage 92.0 failed multiple times, and the most recent loss was due to a "DEFAULT_FILE...

  • 2 kudos
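Editor's note: for completeness, the usual first-line remediation tied to DEFAULT_FILE_NOT_FOUND is refreshing the table's cached file listing in the same session that is about to read it (for example at the top of the failing notebook in the workflow), which the poster mentions having tried. The table name below is a placeholder.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Refresh Spark's cached file listing for the table before reading it in this job run.
spark.sql("REFRESH TABLE my_catalog.my_schema.my_table")
# Equivalent programmatic form:
spark.catalog.refreshTable("my_catalog.my_schema.my_table")
```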
mickniz
by Contributor

cannot import name 'sql' from 'databricks'

I am working on a Databricks 10.4 premium cluster, and while importing sql from the databricks module I am getting the error below: cannot import name 'sql' from 'databricks' (/databricks/python/lib/python3.8/site-packages/databricks/__init__.py). Trying...

  • 14319 Views
  • 7 replies
  • 18 kudos
Latest Reply
wallystart
New Contributor II

I resolved the same error by installing the library from the cluster interface (UI).

  • 18 kudos
6 More Replies
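Editor's note: for later readers, `from databricks import sql` appears to come from the separate databricks-sql-connector package, which is what installing the library on the cluster (as in the reply above) makes available; it can also be installed notebook-scoped with `%pip install databricks-sql-connector`. The hostname, HTTP path, and token below are placeholders, not real values.

```python
from databricks import sql

connection = sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",  # placeholder workspace host
    http_path="/sql/1.0/warehouses/abc123",                         # placeholder warehouse HTTP path
    access_token="dapiXXXXXXXXXXXX",                                 # placeholder personal access token
)
cursor = connection.cursor()
cursor.execute("SELECT 1")
print(cursor.fetchall())
cursor.close()
connection.close()
```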
dvmentalmadess
by Valued Contributor

Ingestion Time Clustering on initial load

We are migrating our data into Databricks and I was looking at the recommendations for partitioning here: https://docs.databricks.com/tables/partitions.html. This recommends not specifying partitioning and allowing "Ingestion Time Partitioning" (ITP)...

  • 1088 Views
  • 3 replies
  • 0 kudos
Latest Reply
Anonymous
Not applicable

Hi @dvmentalmadess Hope all is well! Just wanted to check in to see if you were able to resolve your issue, and if so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. T...

  • 0 kudos
2 More Replies