Data Engineering

Forum Posts

Fresher
by New Contributor II
  • 57 Views
  • 1 replies
  • 0 kudos

Users are deleted/unsynced from Azure AD to Databricks

In Azure AD, it shows the users are synced to Databricks. But in Databricks, it shows the user is not part of the group. The user is missing from only one group; he is part of the remaining groups. All the syncing worked fine until yesterday. I don't know ...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Fresher, It sounds like you’re experiencing an issue with user synchronization between Azure AD and Databricks. Let’s troubleshoot this together! Here are some steps you can take to resolve the issue: Check SCIM Provisioning Configuration: En...

  • 0 kudos
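The reply's first suggestion (checking SCIM provisioning) can be verified from the Databricks side. Below is a minimal sketch, not @Fresher's actual setup, that queries the workspace SCIM API to list the members a group really contains; the workspace URL, token, and group name are hypothetical placeholders.

import requests

# Hypothetical placeholders -- replace with your workspace URL, token, and group name.
DATABRICKS_HOST = "https://<your-workspace>.azuredatabricks.net"
TOKEN = "<personal-access-token>"

# List the members Databricks actually sees for one synced group via the SCIM API.
resp = requests.get(
    f"{DATABRICKS_HOST}/api/2.0/preview/scim/v2/Groups",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"filter": 'displayName eq "my-synced-group"'},
)
resp.raise_for_status()

for group in resp.json().get("Resources", []):
    print(group["displayName"])
    for member in group.get("members", []):
        print("  -", member.get("display"))

Comparing this output with the group's membership in Azure AD helps narrow down whether the gap is on the provisioning side or the Databricks side.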
chloeh
by New Contributor II
  • 67 Views
  • 1 replies
  • 0 kudos

Chaining window aggregations in SQL

In my SQL data transformation pipeline, I'm doing chained/cascading window aggregations: for example, I want to compute the average over the last 5 minutes, then compute the average over the past day on top of the 5-minute average, so that my aggregations are mor...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @chloeh, You’re working with a Spark SQL data transformation pipeline involving chained window aggregations. Let’s look at your code snippet and see if we can identify the issue. First, let’s break down the steps you’ve implemented: You’re read...

  • 0 kudos
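A minimal sketch of the chained pattern described in this thread, assuming a batch pipeline with a hypothetical events table and event_time/value columns: the inner query buckets raw rows into 5-minute windows, and the outer query averages those buckets per day.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Chained/cascading aggregation: 5-minute averages first, then a daily average on top.
daily_over_5min = spark.sql("""
    WITH five_min_avg AS (
        SELECT
            window(event_time, '5 minutes') AS w,    -- 5-minute tumbling window
            AVG(value) AS avg_5m
        FROM events                                  -- hypothetical source table
        GROUP BY window(event_time, '5 minutes')
    )
    SELECT
        date_trunc('DAY', w.start) AS day,           -- roll the 5-minute averages up by day
        AVG(avg_5m) AS daily_avg_of_5m_avg
    FROM five_min_avg
    GROUP BY date_trunc('DAY', w.start)
""")

daily_over_5min.show()

Note that chaining time-window aggregations inside a single streaming query has extra requirements (watermarks and a recent Spark runtime), so this sketch assumes batch execution.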
AdityaM
by New Contributor
  • 102 Views
  • 1 replies
  • 0 kudos

Creating external tables using gzipped CSV file - S3 URI without extensions

Hi Databricks community, hope you are doing well. I am trying to create an external table using a gzipped CSV file uploaded to an S3 bucket. The S3 URI of the resource doesn't have any file extension, but the content of the file is a gzipped comma-sepa...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @AdityaM, It seems you’re encountering an issue with creating an external table from a Gzipped CSV file in Databricks using an S3 URI without file extensions. Let’s address this step by step. SerDe (Serializer/Deserializer): When creating an e...

  • 0 kudos
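Since Spark infers a CSV file's compression codec from its extension, an object without a .gz suffix may be read as raw bytes. Below is a minimal, hedged sketch of one workaround: copy the object to a path that does end in .gz and point an external table at that location. All bucket paths and table names are hypothetical.

# Copy the extension-less gzipped object to a path with a recognizable ".gz" suffix.
src = "s3://my-bucket/raw/data_no_extension"   # hypothetical gzipped CSV, no extension
dst = "s3://my-bucket/staged/data.csv.gz"      # same bytes, extension Spark can recognize
dbutils.fs.cp(src, dst)                        # dbutils is available in Databricks notebooks

# External table over the staged location.
spark.sql("""
    CREATE TABLE IF NOT EXISTS my_catalog.my_schema.my_external_csv
    USING CSV
    OPTIONS (header 'true', inferSchema 'true')
    LOCATION 's3://my-bucket/staged/'
""")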
kazinahian
by New Contributor III
  • 97 Views
  • 1 replies
  • 0 kudos

Low-code ETL in Databricks

Hello everyone, I work as a Business Intelligence practitioner, employing tools like Alteryx or various low-code solutions to construct ETL processes and develop data pipelines for my dashboards and reports. Currently, I'm delving into Azure Databrick...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @kazinahian,  In the Azure ecosystem, you have a few options for building ETL (Extract, Transform, Load) data pipelines, including low-code solutions. Let’s explore some relevant tools: Azure Data Factory: Purpose: Azure Data Factory is a clou...

  • 0 kudos
MohammadWasi
by Visitor
  • 18 Views
  • 1 replies
  • 0 kudos

I can list the file using dbutils but am not able to read it in Databricks

I can list the file using dbutils but am not able to read it in Databricks. Please find the screenshot below. I can see the file using dbutils.fs.ls, but when I try to read this file using read_excel it shows me an error like "FileNotFound...

MohammadWasi_0-1715064354700.png
Data Engineering
Databricks
Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @MohammadWasi, It seems like you’re encountering a common issue related to file paths when working with pd.read_excel in Python. Let’s troubleshoot this step by step: Check the File Path: First, ensure that the Excel file (abcd.xls) is indeed ...

  • 0 kudos
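A minimal sketch of the path mismatch the reply is pointing at, assuming the file really lives on DBFS: dbutils.fs.ls takes dbfs:/-style paths, while pandas needs a local filesystem path such as the /dbfs FUSE mount. The directory is a hypothetical placeholder; abcd.xls is the file name mentioned in the thread.

import pandas as pd

# Spark/dbutils see the file through the dbfs:/ scheme ...
print(dbutils.fs.ls("dbfs:/FileStore/tables/"))

# ... but pandas reads the same file through the local /dbfs FUSE mount.
# (Reading legacy .xls files may additionally require the xlrd package.)
df = pd.read_excel("/dbfs/FileStore/tables/abcd.xls")
print(df.head())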
ashraf1395
by New Contributor
  • 22 Views
  • 1 replies
  • 0 kudos

Starting a serverless SQL cluster on GCP

Hello there, I am trying to start a serverless Databricks SQL cluster in GCP. I am following this Databricks doc: https://docs.gcp.databricks.com/en/admin/sql/serverless.html I have checked that all my requirements are fulfilled for activating the clus...

Screenshot 2024-05-07 113120.png Screenshot 2024-05-07 113137.png
Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @ashraf1395, It seems you’re encountering some confusion while trying to enable the serverless SQL cluster in Databricks on Google Cloud Platform (GCP). Let’s troubleshoot this together! First, I appreciate that you’ve followed the steps outlin...

  • 0 kudos
jainshasha
by New Contributor II
  • 262 Views
  • 12 replies
  • 2 kudos

Job Cluster in Databricks workflow

Hi, I have configured 20 different workflows in Databricks. All of them are configured with a job cluster with a different name. All 20 workflows are scheduled to run at the same time. But even with a different job cluster configured in each of them, they run sequentially w...

Latest Reply
emora
New Contributor II
  • 2 kudos

Honestly, you shouldn't have any kind of limitation executing different workflows. I did a test case in my Databricks, and if your workflows use a job cluster you shouldn't have a limitation. But I did all my tests in Azure, and just for you to kn...

  • 2 kudos
11 More Replies
namankhamesara
by New Contributor II
  • 21 Views
  • 1 replies
  • 0 kudos

Error while running Databricks modules

Hi Databricks Community, I am following this course provided by Databricks: https://customer-academy.databricks.com/learn/course/1266/data-engineering-with-databricks?generated_by=575333&hash=6edddab97f2f528922e2d38d8e4440cda4e5302a. In this, when I am ...

namankhamesara_0-1715054731073.png
Data Engineering
databrickscommunity
Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @namankhamesara, Thank you for reaching out! It appears there might be an issue with accessing the data for your course. To expedite your request and resolve this issue promptly, please list your concerns on our ticketing portal. Our support staff...

  • 0 kudos
Shazam
by New Contributor
  • 75 Views
  • 1 replies
  • 0 kudos

Ingestion time clustering - Initial load

As per the available info, ingestion time clustering makes use of the time a file is written or ingested into Databricks. In a use case where there is a new Delta table and an ETL which runs on a schedule (say daily) inserting records, I am able to ...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Shazam, Great questions! Let’s break down each scenario: Initial Data Migration: When migrating data from an existing platform to Databricks, you might have a large initial load of records. In this case, ingestion time clustering can still be...

  • 0 kudos
Anske
by New Contributor II
  • 167 Views
  • 6 replies
  • 1 kudos

Resolved! DLT apply_changes applies only deletes and inserts not updates

Hi, I have a DLT pipeline that applies changes from a source table (cdctest_cdc_enriched) to a target table (cdctest) with the following code: dlt.apply_changes(target = "cdctest", source = "cdctest_cdc_enriched", keys = ["ID"], sequence_by...

Data Engineering
Delta Live Tables
Latest Reply
Kaniz
Community Manager
  • 1 kudos

Hi @Anske, It seems you’re encountering an issue with your Delta Live Tables (DLT) pipeline where updates from the source table are not being correctly applied to the target table. Let’s troubleshoot this together! Pipeline Update Process: Whe...

  • 1 kudos
5 More Replies
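For reference, a minimal sketch of the apply_changes pattern from this thread, completed with hypothetical column names since the original snippet is truncated. One common cause of the "deletes and inserts apply but updates don't" symptom is a sequence_by column that does not increase for update rows, so they are discarded as out-of-order events.

import dlt
from pyspark.sql.functions import col, expr

dlt.create_streaming_table("cdctest")

dlt.apply_changes(
    target = "cdctest",
    source = "cdctest_cdc_enriched",
    keys = ["ID"],
    sequence_by = col("tran_begin_time"),             # hypothetical ordering column; must increase per change
    apply_as_deletes = expr("operation = 'DELETE'"),  # hypothetical CDC operation flag
    except_column_list = ["operation", "tran_begin_time"],
    stored_as_scd_type = 1,
)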
halox6000
by New Contributor III
  • 38 Views
  • 1 replies
  • 0 kudos

How do I stop PySpark from outputting text

I am using a tqdm progress bar to monitor the number of records I have collected via an API. I am temporarily writing them to a file in DBFS, then loading them into a Spark DataFrame. Each time I write to a file, I get a message like 'Wrote 8873925 ...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @halox6000, To stop the progress bar output from tqdm, you can use the disable argument. Set it to True to silence any tqdm output. In fact, it will not only hide the display but also skip the progress bar calculations entirely. Here’s an examp...

  • 0 kudos
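A minimal sketch of the disable flag the reply mentions, with a placeholder loop standing in for the API collection: passing disable=True (or a switch) makes tqdm skip the bar entirely.

from tqdm import tqdm

VERBOSE = False  # hypothetical switch; set to True to see the bar again

for page in tqdm(range(100), disable=not VERBOSE):
    pass  # placeholder: fetch a page of records from the API and append it to the temp file

Note that the 'Wrote ... bytes' message in the question likely comes from the DBFS write itself (e.g. dbutils.fs.put) rather than from tqdm.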
MrD
by New Contributor
  • 77 Views
  • 1 replies
  • 0 kudos

Issue with autoscaling the cluster

Hi all, my job is breaking as the cluster is not able to autoscale. Below is the log. Can it be due to AWS VMs not spinning up, or could it be an issue with the Databricks configuration? Has anyone faced this before? TERMINATING Compute terminated. Reason:...

Latest Reply
koushiknpvs
New Contributor III
  • 0 kudos

Hey MrD, I faced this issue while running Azure VMs. A restart and re-attaching the cluster helped me. Please let me know if that works for you.

  • 0 kudos
smukhi
by New Contributor
  • 115 Views
  • 2 replies
  • 0 kudos

Encountering Error UNITY_CREDENTIAL_SCOPE_MISSING_SCOPE

As of this morning, we started receiving the following error message on a Databricks job with a single PySpark notebook task. The job has not had any code changes in 2 months. The cluster configuration has also not changed. The last successful run of ...

Latest Reply
smukhi
New Contributor
  • 0 kudos

As advised, I double-confirmed that no code or cluster configuration was changed (I even got a second set of eyes on it, who confirmed the same). I was able to find a "fix" which puts a band-aid on the issue: I was able to pinpoint that the issue seems to...

  • 0 kudos
1 More Replies
Wolfoflag
by New Contributor II
  • 39 Views
  • 1 replies
  • 0 kudos

Threads vs Processes (Parallel Programming) Databricks

Hi everyone, I am trying to implement parallel processing in Databricks, and all the resources online point to using ThreadPool from Python's multiprocessing.pool library or the concurrent.futures library. These libraries offer methods for creating async...

Latest Reply
Wojciech_BUK
Contributor III
  • 0 kudos

I am not a super expert, but I have been using Databricks for a while and I can say that when you use any Python library like asyncio, ThreadPool and so on, this is good only for maintenance tasks, small API calls, etc. When you want to leverage s...

  • 0 kudos
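A minimal sketch of the split the reply describes, using hypothetical endpoints: a driver-side thread pool handles many small, I/O-bound API calls, while any heavy data transformation stays in Spark.

from concurrent.futures import ThreadPoolExecutor, as_completed
import requests

urls = [f"https://api.example.com/items/{i}" for i in range(20)]  # hypothetical endpoints

def fetch(url: str) -> int:
    # I/O-bound work: threads are fine here because the GIL is released while waiting on the network.
    return requests.get(url, timeout=10).status_code

with ThreadPoolExecutor(max_workers=8) as pool:
    futures = {pool.submit(fetch, u): u for u in urls}
    for fut in as_completed(futures):
        print(futures[fut], "->", fut.result())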
digui
by New Contributor
  • 1975 Views
  • 4 replies
  • 0 kudos

Issues when trying to modify log4j.properties

Hi y'all. I'm trying to export metrics and logs to AWS CloudWatch, but while following their tutorial to do so, I ended up facing this error when trying to initialize my cluster with an init script they provided. This is the part where the script fail...

Latest Reply
cool_cool_cool
New Contributor II
  • 0 kudos

@digui Did you figure out what to do? We're facing the same issue; the script works for the executors. I was thinking of adding an if that checks whether log4j.properties exists and modifying it only if it does.

  • 0 kudos
3 More Replies