Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.
Data + AI Summit 2024 - Data Engineering & Streaming

Forum Posts

DennisB
by New Contributor III
  • 2824 Views
  • 4 replies
  • 2 kudos

Resolved! Better Worker Node Core Utilisation

Hi everyone, hoping someone can help me with this problem. I have an embarrassingly parallel workload, which I'm parallelising over 4 worker nodes (of type Standard_F4, so 4 cores each). Each workload is single-threaded, so I believe that only one cor...

Latest Reply
DennisB
New Contributor III
  • 2 kudos

So I managed to get the 1-core-per-executor working successfully. The bit that wasn't working was spark.executor.memory -- this was too high, but lowering it so that the sum of the executors' memory was ~90% of the worker node's memory allowed it to w...
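For reference, a minimal sketch of the cluster-level Spark config this describes, expressed as the spark_conf dict you might pass in a cluster spec. The memory value is an assumption and should be tuned so that four single-core executors together stay around 90% of the memory available to Spark on a Standard_F4 node:

    # Hedged sketch: one single-core executor per core on a 4-core worker.
    spark_conf = {
        "spark.executor.cores": "1",   # 4 executors per Standard_F4 node
        "spark.executor.memory": "1g", # placeholder: pick so 4 x this ~= 90% of the node's usable memory
    }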

  • 2 kudos
3 More Replies
MadrasSenpai
by New Contributor II
  • 1352 Views
  • 3 replies
  • 2 kudos

How to install cmdstanpy in dbx cluster

I have built an HMC model using CmdStan. On my local machine, I installed CmdStan with the following approach: import cmdstanpy; cmdstanpy.install_cmdstan(). But in Databricks I need to reinstall it every time I train a new model, from the noteb...
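One possible approach (a hedged sketch, not an official recipe): install CmdStan once to a persistent DBFS path and point cmdstanpy at it from later notebooks, so the toolchain survives cluster restarts. The path below is a hypothetical example:

    import os
    import cmdstanpy

    CMDSTAN_DIR = "/dbfs/tmp/cmdstan"  # hypothetical persistent location on DBFS

    if not os.path.isdir(CMDSTAN_DIR):
        # one-time build; compiling on a DBFS FUSE mount can be slow
        cmdstanpy.install_cmdstan(dir=CMDSTAN_DIR)

    # point cmdstanpy at the newest cmdstan-<version> directory found there
    versions = sorted(d for d in os.listdir(CMDSTAN_DIR) if d.startswith("cmdstan-"))
    cmdstanpy.set_cmdstan_path(os.path.join(CMDSTAN_DIR, versions[-1]))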

Latest Reply
Anonymous
Not applicable
  • 2 kudos

Hi @Rajamannar Aanjaram Krishnamoorthy, great to meet you, and thanks for your question! Let's see if your peers in the community have an answer to your question. Thanks.

  • 2 kudos
2 More Replies
sarguido
by New Contributor II
  • 2215 Views
  • 4 replies
  • 2 kudos

Delta Live Tables: bulk import of historical data?

Hello! I'm very new to working with Delta Live Tables and I'm having some issues. I'm trying to import a large amount of historical data into DLT. However, letting the DLT pipeline run forever doesn't work with the database we're trying to import from...
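For context, a hedged sketch of one common pattern: land the history as a one-off extract (for example Parquet files dumped from the source database) and have a DLT table batch-read that extract, rather than pulling the full history through the pipeline's incremental source. The table name and path below are hypothetical, and the dlt module only resolves when the notebook runs inside a DLT pipeline:

    import dlt

    @dlt.table(name="events_history_raw")
    def events_history_raw():
        # one-time batch read of a historical dump exported from the source database
        return spark.read.format("parquet").load("/mnt/landing/events_history/")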

Latest Reply
Anonymous
Not applicable
  • 2 kudos

Hi @Sarah Guido, thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers y...

  • 2 kudos
3 More Replies
NWIEFInance
by New Contributor
  • 714 Views
  • 1 reply
  • 2 kudos

Connect to EXCEL

I'm having a hard time connecting to Excel. Any help connecting Databricks to Excel?

Latest Reply
Kaniz_Fatma
Community Manager
  • 2 kudos

Hi @NWIEFInance, This article describes using the Databricks ODBC driver to connect Databricks to Microsoft Excel. After establishing the connection, you can access the data in Databricks from Excel. You can also use Excel to analyze the data further...

  • 2 kudos
Priyag1
by Honored Contributor II
  • 1495 Views
  • 2 replies
  • 11 kudos

Query parameters in dashboards

Queries can optionally leverage parameters or static values. When a visualization based on a parameterized query is added to a dashboard, the visualization can either be configured to use a: Widget parameter. Widget paramet...

Latest Reply
Natalie_NL
New Contributor II
  • 11 kudos

Hi, I built a dashboard with dashboard parameters; it works pretty easily! The advantage of dashboard parameters is that you do not have to set a default (it can be: all). This is convenient when you need to filter on values that change every time the q...

  • 11 kudos
1 More Replies
The_raj
by New Contributor
  • 3329 Views
  • 1 reply
  • 2 kudos

Error while reading file <file path>. [DEFAULT_FILE_NOT_FOUND]

Hi, I have a workflow with 5 notebooks in it. One of the notebooks is failing with the below error. I have tried refreshing the table but am still facing the same issue. When I try to run the notebook manually, it works fine. Can someone plea...

Latest Reply
Kaniz_Fatma
Community Manager
  • 2 kudos

Hi @The_raj, the error message you are encountering indicates a failure during the execution of a Spark job on Databricks. Specifically, it seems that Task 736 in Stage 92.0 failed multiple times, and the most recent loss was due to a "DEFAULT_FILE...
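DEFAULT_FILE_NOT_FOUND typically means files backing a cached table listing were rewritten underneath the running job. A hedged sketch of the usual mitigation (the table name is a placeholder), noting that the poster says a plain refresh alone did not resolve it, which may point to an upstream job rewriting files mid-read:

    # invalidate the cached file listing before the failing read
    spark.catalog.refreshTable("my_schema.my_table")
    # or equivalently: spark.sql("REFRESH TABLE my_schema.my_table")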

  • 2 kudos
mickniz
by Contributor
  • 18042 Views
  • 7 replies
  • 18 kudos

cannot import name 'sql' from 'databricks'

I am working on a Databricks 10.4 premium cluster, and while importing sql from the databricks module I am getting the below error: cannot import name 'sql' from 'databricks' (/databricks/python/lib/python3.8/site-packages/databricks/__init__.py). Trying...

Latest Reply
wallystart
New Contributor II
  • 18 kudos

I resolved the same error by installing the library from the cluster interface (UI).
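For anyone hitting the same import error: from databricks import sql comes from the separately installed databricks-sql-connector package, not the runtime itself. A minimal sketch with placeholder connection details:

    # %pip install databricks-sql-connector   (or install it as a cluster library, as above)
    from databricks import sql

    with sql.connect(
        server_hostname="<workspace-hostname>",        # placeholder
        http_path="<warehouse-or-cluster-http-path>",  # placeholder
        access_token="<personal-access-token>",        # placeholder
    ) as connection:
        with connection.cursor() as cursor:
            cursor.execute("SELECT 1")
            print(cursor.fetchall())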

  • 18 kudos
6 More Replies
dvmentalmadess
by Valued Contributor
  • 1448 Views
  • 3 replies
  • 0 kudos

Ingestion Time Clustering on initial load

We are migrating our data into Databricks and I was looking at the recommendations for partitioning here: https://docs.databricks.com/tables/partitions.html. This recommends not specifying partitioning and allowing "Ingestion Time Partitioning" (ITP)...

Latest Reply
Anonymous
Not applicable
  • 0 kudos

Hi @dvmentalmadess, hope all is well! Just wanted to check in to see if you were able to resolve your issue; if so, would you be happy to share the solution or mark an answer as best? Else, please let us know if you need more help. We'd love to hear from you. T...

  • 0 kudos
2 More Replies
RamozanbekS
by New Contributor III
  • 1361 Views
  • 1 reply
  • 0 kudos

Resolved! Databricks SQL Statement Execution API

I'm trying to follow the example provided here https://github.com/databricks-demos/dbsql-rest-api/blob/main/python/external_links.py. It fails when it comes to downloading the data chunks. The statement status turns from SUCCEEDED to CLOSED right away ...

Latest Reply
RamozanbekS
New Contributor III
  • 0 kudos

It turns out that if the response is small and fits within the 16 MB limit, then the status check will also provide a single external link to download the data. So I need a condition here. Maybe even something like this: if len(chunks) == 1: external_url = respons...
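A hedged sketch of that condition, assuming the REST calls and variables (host, headers, statement_id) from the linked example; field names follow the Statement Execution API's EXTERNAL_LINKS disposition:

    import requests

    status = requests.get(
        f"{host}/api/2.0/sql/statements/{statement_id}", headers=headers
    ).json()

    chunks = status["manifest"]["chunks"]
    if len(chunks) == 1:
        # small result: the status response already carries the presigned link
        external_url = status["result"]["external_links"][0]["external_link"]
        payload = requests.get(external_url).content  # no auth headers on the presigned URL
    else:
        # large result: walk result/chunks/{index} as in the original example
        ...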

  • 0 kudos
Asterol
by New Contributor III
  • 936 Views
  • 1 reply
  • 1 kudos

Creating a test schema - what is the best practice?

Hey, I've created a schema with a few tables of historical data (prod), and now I would like to have a dev/testing environment with exactly the same data. What do you recommend? CTAS? Shallow clone? Deep clone? I wonder if shallow clone would be sufficien...

Data Engineering
Clone ctas
Latest Reply
Tharun-Kumar
Honored Contributor II
  • 1 kudos

@Asterol If you would like to have the same data for your Dev/testing environment, I would recommend using Deep Clone. Deep clone copies the metadata and creates an independent copy of the table data. Shallow clone only copies the metadata and will h...
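A minimal sketch of the deep clone suggested above, run from a notebook; the schema and table names are placeholders:

    # creates an independent copy of the prod table's data and metadata for dev/testing
    spark.sql("""
        CREATE OR REPLACE TABLE dev.orders
        DEEP CLONE prod.orders
    """)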

  • 1 kudos
NathanSundarara
by Contributor
  • 1151 Views
  • 0 replies
  • 0 kudos

Sample code to read json from service bus queue in Azure

Hi, I'm looking for a sample notebook or code snippet to read messages from Azure Service Bus queues. I looked for documentation but couldn't find anything. Any help would be appreciated. First we are thinking of batch mode before we move on to streaming. P...
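A hedged batch-mode sketch using the azure-servicebus Python SDK (there is no built-in Databricks source for Service Bus); the connection string, queue name, and message schema are assumptions:

    import json
    from azure.servicebus import ServiceBusClient  # pip install azure-servicebus

    CONN_STR = "<service-bus-connection-string>"  # placeholder
    QUEUE = "<queue-name>"                        # placeholder

    rows = []
    with ServiceBusClient.from_connection_string(CONN_STR) as client:
        with client.get_queue_receiver(queue_name=QUEUE) as receiver:
            for msg in receiver.receive_messages(max_message_count=100, max_wait_time=5):
                rows.append(json.loads(str(msg)))  # assumes each message body is a JSON document
                receiver.complete_message(msg)     # remove the message from the queue

    df = spark.createDataFrame(rows)  # assumes the messages share a flat schema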

Data Engineering
azure
deltalivetable
messagequeue
servicebus
Navashakthi
by New Contributor
  • 1414 Views
  • 4 replies
  • 2 kudos

Resolved! Community Edition Sign-up Issue

Hi, I'm trying to sign up for Community Edition for learning purposes. The sign-up page has an issue with selecting the country: the select dropdown doesn't work and the continue option redirects to the same page. I couldn't complete signup. Kindly help!

Latest Reply
amitdas2k6
New Contributor II
  • 2 kudos

For me it is always displaying the below error, even though I entered the correct username and password. My username: amit.das2k16@gmail.com. The error: "Invalid email address or password. Note: Emails/usernames are case-sensitive"

  • 2 kudos
3 More Replies
Shadowsong27
by New Contributor III
  • 10613 Views
  • 14 replies
  • 4 kudos

Resolved! Mongo Spark Connector 3.0.1 seems not working with Databricks-Connect, but works fine in Databricks Cloud

On latest DB-Connect==9.1.3 and dbr == 9.1, retrieving data from mongo using Maven coordinate of Mongo Spark Connector: org.mongodb.spark:mongo-spark-connector_2.12:3.0.1 - https://docs.mongodb.com/spark-connector/current/ - working fine previously t...

Latest Reply
mehdi3x
New Contributor II
  • 4 kudos

Hi everyone, the solution for me was to replace spark.read.format("mongo") with spark.read.format("mongodb"). My Spark version is 3.3.2 and my MongoDB version is 6.0.6.
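For reference, a hedged sketch of that read with the newer source name; the options follow the 10.x Mongo Spark Connector, and the connection details are placeholders:

    df = (
        spark.read.format("mongodb")                            # "mongo" in the 3.x connector, "mongodb" in 10.x
        .option("connection.uri", "<mongodb-connection-uri>")   # placeholder
        .option("database", "<database>")
        .option("collection", "<collection>")
        .load()
    )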

  • 4 kudos
13 More Replies
erigaud
by Honored Contributor
  • 1550 Views
  • 4 replies
  • 1 kudos

Deploying existing queries and alerts to other workspaces

I have several queries and associated alerts in a workspace, and I would like to be able to deploy them to another workspace, for example a higher environment. Since both queries and objects are not supported in repos, what is the way to go to easi...

Latest Reply
Anonymous
Not applicable
  • 1 kudos

Hi @erigaud, we haven't heard from you since the last response from @btafur, and I was checking back to see if her suggestions helped you. Or else, if you have any solution, please share it with the community, as it can be helpful to others. Also, ...

  • 1 kudos
3 More Replies
dprutean
by New Contributor III
  • 535 Views
  • 0 replies
  • 0 kudos

JDBC DatabaseMetaData.getCatalogs()

Calling DatabaseMetaData.getCatalogs() returns 'spark_catalog' instead of 'hive_metastore' when connected to a traditional Databricks cluster that does not have the uc_catalog tag. Please check this.
