Community Discussions
Connect with fellow community members to discuss general topics related to the Databricks platform, industry trends, and best practices. Share experiences, ask questions, and foster collaboration within the community.

Forum Posts

fazlu_don23
by New Contributor III
  • 97 Views
  • 0 replies
  • 0 kudos

ronaldo is back

create table SalesReport (TerritoryName NVARCHAR(50), ProductName NVARCHAR(100), TotalSales DECIMAL(10,2), PreviousYearSales DECIMAL(10,2), GrowthRate DECIMAL(10,2)); create table ErrorLog (ErrorID int, ErrorMessage nvarchar(max), ErrorDate datetime);...

alesventus
by New Contributor III
  • 534 Views
  • 1 reply
  • 1 kudos

Save dataframe to the same variable

I would like to know if there is any difference between saving a DataFrame during transformation back to itself, as in the first example, or to a new DataFrame, as in the second. Thanks. log_df = log_df.withColumn("process_timestamp", from_utc_timestamp(lit(current_timestamp()), "E...

Latest Reply
Kaniz_Fatma
Community Manager
  • 1 kudos

Hi @alesventus, When saving a DataFrame after a transformation, there is no difference between assigning it back to the same variable or to a new one. Both approaches produce the same output. Sources: [Docs: dataframes-python](https://docs.databricks.com/getti...
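For illustration, a minimal PySpark sketch of the two equivalent patterns, assuming a notebook session where spark is defined; the toy DataFrame and the timezone string are hypothetical stand-ins, since the original snippet is truncated:

```python
from pyspark.sql.functions import current_timestamp, from_utc_timestamp, lit

log_df = spark.range(3)  # toy stand-in for the poster's DataFrame

# Pattern 1: rebind the same name. withColumn returns a new DataFrame
# (DataFrames are immutable); log_df afterwards points at the new one.
log_df = log_df.withColumn(
    "process_timestamp",
    from_utc_timestamp(lit(current_timestamp()), "UTC"),  # timezone is a placeholder
)

# Pattern 2: bind a new name. The computed result is identical; only
# readability and access to the intermediate step differ.
log_df_stamped = log_df.withColumn(
    "process_timestamp",
    from_utc_timestamp(lit(current_timestamp()), "UTC"),  # timezone is a placeholder
)
```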

Mohsen
by New Contributor
  • 1090 Views
  • 1 reply
  • 0 kudos

iceberg

Hi fellas, I am working on Databricks using Iceberg. At first I configured my notebook as below: spark.conf.set("spark.sql.catalog.spark_catalog", "org.apache.iceberg.spark.SparkCatalog") spark.conf.set("spark.sql.catalog.spark_catalog.type", "hadoop") s...

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @Mohsen,
• The exception "RuntimeMetaException: Failed to connect to Hive Metastore" occurs because the Hive metastore cannot find the version information.
• To resolve the issue, follow the steps below:
  - Set up a cluster with spark.sql.hive.m...
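For reference, a minimal sketch of the Hadoop-catalog setup the question describes; the warehouse path is an assumption, since the original snippet is cut off:

```python
# Sketch of an Iceberg Hadoop catalog configured from a notebook session.
# The warehouse location is a placeholder; point it at your own storage.
spark.conf.set("spark.sql.catalog.spark_catalog", "org.apache.iceberg.spark.SparkCatalog")
spark.conf.set("spark.sql.catalog.spark_catalog.type", "hadoop")
spark.conf.set("spark.sql.catalog.spark_catalog.warehouse", "/mnt/lake/iceberg-warehouse")
```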

Kaviana
by New Contributor III
  • 1067 Views
  • 1 reply
  • 0 kudos

how to configure EC2 instance connection in databricks

I would like to know how to configure an AWS instance connection. A VPC and an EC2 instance were configured, and the allowed IP can ping the on-premises server. How would it be possible to configure Databricks so that it can make this connection?

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @Kaviana, To configure an AWS instance connection in Databricks, you need to follow these steps:
1. Create an access policy and a user with access keys in the AWS Console:
  - Go to the IAM service.
  - Click the Policies tab in the sidebar.
  - Clic...
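As a programmatic sketch of that first step (the console route above works equally well), hedged: the policy name and statement below are hypothetical placeholders, not guidance.

```python
import json
import boto3

# Hypothetical sketch: create the access policy from code rather than the
# IAM console. The name and permissions are placeholders.
iam = boto3.client("iam")
policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": ["ec2:Describe*"], "Resource": "*"}],
}
iam.create_policy(PolicyName="databricks-ec2-access", PolicyDocument=json.dumps(policy))
```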

lpf
by New Contributor
  • 1420 Views
  • 1 reply
  • 0 kudos

Changing StreamingWrite API in DBR 13.1 may lead to incompatibility with Spark 3.4

I'm using the StarRocks Connector[2] to ingest data to StarRocks on Databricks 13.1 (powered by Spark 3.4.0). The connector runs on community Spark 3.4, but fails on the DBR. The reason is (the full stack trace is attached): java.lang.IncompatibleClass...

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @lpf, Based on the information provided, there seems to be a compatibility issue between the StarRocks Connector and Databricks Runtime 13.1 (powered by Spark 3.4.0). The problem arises because the StarRocksWrite class implements both the BatchWri...

olegmir
by New Contributor III
  • 882 Views
  • 2 replies
  • 1 kudos

Resolved! Thread leakage when getConnection fails

Hi, we are using the Databricks JDBC driver https://mvnrepository.com/artifact/com.databricks/databricks-jdbc/2.6.33. It seems like there is a thread leak when getConnection fails. Could anyone advise? It can be reproduced with: @Test void databricksThreads() {...

Latest Reply
olegmir
New Contributor III
  • 1 kudos

Hi, none of the above suggestions will work. We already contacted the Databricks JDBC team; the thread leak was confirmed and fixed in version 2.6.34 (https://mvnrepository.com/artifact/com.databricks/databricks-jdbc/2.6.34). This leak still exists if...

1 More Replies
yhyhy3
by New Contributor III
  • 968 Views
  • 1 reply
  • 0 kudos

Displaying Dataframes with ipywidgets.Output is Adding Unexpected Commas

I am currently working in a Databricks notebook and using an ipywidgets.Output to display a pandas DataFrame. Because a spark.DataFrame cannot be displayed in an ipywidgets.Output widget, I have been using: import pandas as pd import numpy as np import ...

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @yhyhy3, Based on the information, the issue you are facing seems related to the ipywidgets library. The ipywidgets package is used to create interactive elements in Databricks notebooks. However, there might be a compatibility issue with your ve...
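For context, a minimal sketch of the pattern the question describes (the toy data is made up); per the reply, a version mismatch between ipywidgets and the notebook frontend is one plausible cause of the garbled rendering:

```python
import pandas as pd
import ipywidgets as widgets
from IPython.display import display

# Render a pandas DataFrame inside an Output widget. The data is a toy
# stand-in; the pattern matches what the question reports as affected.
out = widgets.Output()
pdf = pd.DataFrame({"a": [1, 2, 3], "b": [4.0, 5.0, 6.0]})
with out:
    display(pdf)
display(out)
```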

Policepatil
by New Contributor II
  • 522 Views
  • 1 reply
  • 0 kudos

Missing records while using limit in multithreading

Hi, I need to process nearly 30 files from different locations and insert records to RDS. I am using multi-threading to process these files in parallel, like below. Test data: (see attached image) I have configuration like below based on column 4: If column 4=0:...

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @Policepatil, Based on the given information, it seems that the issue occurs when filtering the records based on the record type. The missing records are inconsistent and can arise from different files, or even within the same file but with other...

priyakant1
by New Contributor II
  • 470 Views
  • 1 reply
  • 0 kudos

Suspension of Data Engineer Professional exam

Hi Databricks Team, I had scheduled my exam on 6th Sep 2023. During the exam, the same pop-up came up, stating that I was looking in some other direction. I told them that my laptop mouse was not working properly, so I was looking at it. But still they suspended ...

Latest Reply
sirishavemula20
New Contributor III
  • 0 kudos

Hi @priyakant1, have you got any response from the team? Did they reschedule your exam?

sirishavemula20
by New Contributor III
  • 1328 Views
  • 3 replies
  • 1 kudos

Resolved! My exam was suspended, need help urgently (21/08/2023)

Hello Team, I encountered a pathetic experience while attempting my 1st Databricks certification. Abruptly, the proctor asked me to show my desk; after showing it, he/she asked multiple times, wasted my time, and then suspended my exam. I want to file a complain...

Latest Reply
sirishavemula20
New Contributor III
  • 1 kudos

Sub: My exam Databricks Data Engineer Associate got suspended, need immediate help please (10/09/2023). I encountered a pathetic experience while attempting my Databricks Data Engineer certification. Abruptly, the proctor asked me to show my desk; after showin...

2 More Replies
Policepatil
by New Contributor II
  • 1808 Views
  • 2 replies
  • 1 kudos

Resolved! Records are missing while filtering the dataframe in multithreading

Hi, I need to process nearly 30 files from different locations and insert records to RDS. I am using multi-threading to process these files in parallel, like below. Test data: (see attached image) I have configuration like below based on column 4: If colum...

Latest Reply
sean_owen
Honored Contributor II
  • 1 kudos

Looks like you are comparing to strings like "1", not values like 1, in your filter condition. It's hard to say; some details are missing, like the rest of the code, the DataFrame schema, and the output you are observing.
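A minimal sketch of making the comparison type explicit; the toy DataFrame and the column name are placeholders for the poster's data:

```python
from pyspark.sql import functions as F

df = spark.createDataFrame([("a", "1"), ("b", "2")], ["column1", "column4"])  # toy data

# If the column holds strings, compare like-for-like or cast explicitly;
# a string-vs-int mismatch in a filter is one way rows can silently vanish.
records_type_1 = df.filter(F.col("column4").cast("int") == 1)
```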

1 More Replies
VMeghraj
by New Contributor II
  • 845 Views
  • 2 replies
  • 0 kudos

Increase cores for Spark History Server

By default, SHS uses spark.history.fs.numReplayThreads = 25% of available cores (the number of threads that will be used by the history server to process event logs). How can we increase the number of cores for the Spark History Server?

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @VMeghraj, To increase the number of threads available to the Spark History Server, you can modify the spark.history.fs.numReplayThreads configuration parameter. You can set the desired number by modifying the value of spark.history.fs.numReplayThreads...
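For example, a minimal sketch of the setting in spark-defaults.conf on the host running the history server; the value shown is arbitrary, not a recommendation:

```
# spark-defaults.conf for the Spark History Server host.
# 48 is an illustrative value only.
spark.history.fs.numReplayThreads  48
```

The history server reads this at startup, so it needs a restart for the change to take effect.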

1 More Replies
meystingray
by New Contributor II
  • 936 Views
  • 1 reply
  • 0 kudos

Databricks Rstudio Init Script Deprecated

OK, so I'm trying to use open-source RStudio on Azure Databricks. I'm following the instructions here: https://learn.microsoft.com/en-us/azure/databricks/sparkr/rstudio#install-rstudio-server-open-source-edition. I've installed the necessary init script ...

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @meystingray, The error message you're encountering indicates that the init script path is not absolute. According to the Databricks documentation, init scripts should be stored as workspace files. Here's how you can do it: 1. Store your ini...
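As a hedged sketch of what the cluster configuration ends up referencing (the script path is hypothetical, and the exact spec shape should be checked against the Databricks docs):

```python
# Hypothetical fragment of a cluster spec: an init script stored as a
# workspace file, referenced by an absolute workspace path.
init_scripts = [
    {"workspace": {"destination": "/Users/someone@example.com/rstudio-install.sh"}}
]
```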

Policepatil
by New Contributor II
  • 5302 Views
  • 1 reply
  • 0 kudos

Is it good to process files in multithreading?

Hi, I need to process nearly 30 files from different locations and insert records to RDS. I am using multi-threading to process these files in parallel, like below. def process_files(file_path): <process files here> 1. Find bad records based on fie...

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @Policepatil,
- The approach of processing files in parallel can increase the overall speed of the operation.
- Multi-threading can optimize CPU usage but does not necessarily make I/O operations faster.
- I/O operations like reading and writing files are...
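To make that trade-off concrete, a minimal thread-pool sketch for I/O-bound fan-out; the paths and the per-file body are placeholders:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def process_file(path):
    # Placeholder: read the file, find bad records, insert good ones to RDS.
    ...

paths = [f"/mnt/source/file_{i}.csv" for i in range(30)]  # hypothetical locations
with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(process_file, p) for p in paths]
    for fut in as_completed(futures):
        fut.result()  # re-raises any per-file exception instead of hiding it
```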

bachan
by New Contributor II
  • 904 Views
  • 2 replies
  • 0 kudos

Data Insertion

Scenario: data from blob storage to a SQL DB once a week. I have 15 days of data (from the current date to the next 15 days) in blob storage, stored date-wise in Parquet format, and after seven days the next 15 days of data will be inserted. This means that until the 7th day t...

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @bachan, Based on your scenario, you might consider using Azure Data Factory (ADF) for your data pipeline. Azure Data Factory is a cloud-based data integration service that orchestrates and automates the movement and transformation of data. Here ...
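If the weekly copy were instead run from a Databricks notebook (where spark and dbutils are defined), a hedged PySpark sketch might look like this; every path, table, and credential below is an assumption:

```python
from datetime import date, timedelta

# Hypothetical sketch: read the last seven date-partitioned Parquet folders
# and append them to the SQL table over JDBC. All identifiers are made up.
days = [date.today() - timedelta(days=n) for n in range(1, 8)]
paths = [f"/mnt/blob/data/{d:%Y-%m-%d}" for d in days]

df = spark.read.parquet(*paths)
(df.write.format("jdbc")
    .option("url", "jdbc:sqlserver://example.database.windows.net;database=exampledb")
    .option("dbtable", "dbo.weekly_load")
    .option("user", "loader")
    .option("password", dbutils.secrets.get("example-scope", "sql-password"))
    .mode("append")
    .save())
```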

1 More Replies