Data Engineering

Forum Posts

Joao_DE
by New Contributor III
  • 1070 Views
  • 2 replies
  • 0 kudos

Run pytest inside Repos and store the results in DBFS

Hi everyone! I am trying to run pytest inside a notebook in Repos and store the results in DBFS, but I am getting a permission denied error. Does anyone know why this happens and how to solve it? Error:
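A minimal sketch of one common workaround (the repo path is a placeholder): Repos folders are read-only for pytest's cache and bytecode files, which is a frequent cause of "permission denied", so write the report to the driver's local disk first and then copy it into DBFS with dbutils.

import os
import sys
import pytest

# Run from the repo root so test discovery works inside Repos (placeholder path).
os.chdir("/Workspace/Repos/<user>/<repo>")
# Repos reject .pyc/cache writes, so disable them.
sys.dont_write_bytecode = True

# Write the JUnit report to local driver storage, then copy it to DBFS.
exit_code = pytest.main(["tests/", "-v", "-p", "no:cacheprovider",
                         "--junitxml=/tmp/test-results.xml"])
dbutils.fs.cp("file:/tmp/test-results.xml", "dbfs:/test-results/test-results.xml")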

Latest Reply
Vartika
Moderator
  • 0 kudos

Hi @João Peixoto Hope everything is going great. Just wanted to check in if you were able to resolve your issue. If yes, would you be happy to mark an answer as best so that other members can find the solution more quickly? If not, please tell us so ...

1 More Replies
pvignesh92
by Honored Contributor
  • 1438 Views
  • 4 replies
  • 2 kudos

Resolved! Please restrict spamming

Hi @Vidula Khanna, recently there have been too many spam posts in the community discussions. I'm sure you might have noticed them. Is there any chance to clear all of them and maybe restrict them in some way so that the purpose of this community...

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 2 kudos

@Suteja Kanuri Using Azure OpenAI GPT-4, we could connect it to the community and use two of its features: ask the question "is it spam?" and verify the user post that way, and display ready answers from OpenAI so we avoid asking over and over again duplic...

3 More Replies
pandu
by New Contributor II
  • 1406 Views
  • 2 replies
  • 3 kudos

Connect to an Oracle database using JDBC and perform a merge

I would like to connect to an Oracle database using the JDBC driver and write code in Python to perform a merge.
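A minimal sketch, assuming the Oracle JDBC driver is installed on the cluster and the merge target is a Delta table (URL, credentials, and table names are placeholders):

from delta.tables import DeltaTable

# Read the Oracle source table over JDBC.
source_df = (spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//<host>:1521/<service>")
    .option("dbtable", "SOURCE_TABLE")
    .option("user", "<user>")
    .option("password", "<password>")
    .option("driver", "oracle.jdbc.driver.OracleDriver")
    .load())

# Upsert the Oracle rows into a Delta target keyed on a shared id column.
target = DeltaTable.forName(spark, "target_table")
(target.alias("t")
    .merge(source_df.alias("s"), "t.id = s.id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())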

Latest Reply
Vartika
Moderator
  • 3 kudos

Hi @Venkata Krishna Jonnalagadda Hope you are well. Just checking in. If @John Lourdu's answer helped, would you let us know and mark the answer as best? If not, would you be happy to give us more information? Thanks!

1 More Replies
William_Scardua
by Valued Contributor
  • 687 Views
  • 2 replies
  • 1 kudos

How to get executor info via the SDK (Python)

Hi guys, how do I get executor information for my cluster via the SDK (Python)? Any ideas? Thank you.
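A minimal sketch using the Databricks SDK for Python, assuming a running cluster (the cluster id is a placeholder); the cluster details returned by the SDK expose driver and executor node information:

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # picks up auth from env vars or ~/.databrickscfg
cluster = w.clusters.get(cluster_id="<cluster-id>")

# Executors are only populated while the cluster is running.
print("executor count:", len(cluster.executors or []))
for node in cluster.executors or []:
    print(node.node_id, node.private_ip)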

Latest Reply
Vartika
Moderator
  • 1 kudos

Hi @William Scardua We haven't heard from you since the last response from @josephk, and I was checking back to see if it helped you. Or else, if you have any solution, please share it with the community, as it can be helpful to others. Also, please d...

1 More Replies
Anonymous
by Not applicable
  • 906 Views
  • 4 replies
  • 4 kudos

How to create a new group in the Databricks community?

How to create a new group in the Databricks community? Dear esteemed community users, it is with great pleasure that we inform you of an important update regarding the creation of Groups on Community. As part of our continuous efforts to enhance your e...

Latest Reply
Anonymous
Not applicable
  • 4 kudos

Hi @Hubert Dudek and @Ratna Chaitanya Raju Bandaru: Thanks for pointing this out. This is going to be a design decision which we will take after looking into the ask carefully. Thanks for getting the conversation going. This really helps us.

3 More Replies
JordiDekker
by New Contributor III
  • 1452 Views
  • 5 replies
  • 6 kudos

StreamCorruptedException, databricks-connect 9.1

Last week, around the 21st of March, we started having issues with databricks-connect (DBR 9.1 LTS). "databricks-connect test" works, but the following code snippet: from pyspark.sql import SparkSession; spark = SparkSession.builder.getOrCreate() s...

Latest Reply
Anonymous
Not applicable
  • 6 kudos

Hi @Jordi Dekker Thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers ...

4 More Replies
Gk
by New Contributor III
  • 1485 Views
  • 2 replies
  • 1 kudos

DataFrame

How can we create an empty DataFrame in Databricks, and in how many ways can we create a DataFrame?
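A short sketch of two common ways to create an empty DataFrame with an explicit schema (there are others, such as spark.range or toDF on an RDD):

from pyspark.sql.types import IntegerType, StringType, StructField, StructType

schema = StructType([
    StructField("id", IntegerType(), True),
    StructField("name", StringType(), True),
])

empty_df = spark.createDataFrame([], schema)                   # no rows, explicit StructType
empty_df2 = spark.createDataFrame([], "id INT, name STRING")   # schema given as a DDL string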

Latest Reply
Vartika
Moderator
  • 1 kudos

Hi @Govardhana Reddy Hope everything is going great. Does @Suteja Kanuri's answer help? If yes, would you be happy to mark an answer as best so that other members can find the solution more quickly? If not, please tell us so we can help you. Cheers!

1 More Replies
tlbarata
by New Contributor II
  • 1339 Views
  • 3 replies
  • 1 kudos

Outdated - Databricks Data Engineer Associate v2 lesson DE 4.2

While following the video lesson and executing notebook 4.2, I noticed that the CREATE TABLE "users_jdbc" command generates an EXTERNAL table, while both the video and the notebook suggest it should be a managed table. Here are some printscr...
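A minimal sketch of how to confirm the table type (the JDBC options are placeholders): a table created USING JDBC points at data Spark does not own, so DESCRIBE EXTENDED reports it as EXTERNAL rather than MANAGED.

spark.sql("""
    CREATE TABLE IF NOT EXISTS users_jdbc
    USING JDBC
    OPTIONS (url = '<jdbc-url>', dbtable = 'users')
""")

# The "Type" row in the output shows MANAGED vs EXTERNAL.
display(spark.sql("DESCRIBE EXTENDED users_jdbc"))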

[Screenshots: CREATE TABLE, DESCRIBE EXTENDED output, and the DESCRIBE output from the video lesson]
Latest Reply
Vartika
Moderator
  • 1 kudos

Hi @Tiago Barata Hope all is well! Just wanted to check in if you were able to resolve your issue and would you be happy to share the solution or mark an answer as best? Else please let us know if you need more help. We'd love to hear from you. Thank...

2 More Replies
alejandrofm
by Valued Contributor
  • 1390 Views
  • 4 replies
  • 0 kudos

AppendDataExecV1 Taking a lot of time

Hi, I have a PySpark job that takes about an hour to complete. When looking at the SQL tab in the Spark UI I see this: those processes run for more than 1 minute each in a 60-minute job. This is Ganglia for that period (the last snapshot, will look into a l...

Latest Reply
Vartika
Moderator
  • 0 kudos

Hi @Alejandro Martinez​ Hope all is well! Just wanted to check in if you were able to resolve your issue and would you be happy to share the solution or mark an answer as best? Else please let us know if you need more help. We'd love to hear from you...

3 More Replies
ghofigjong
by New Contributor
  • 3114 Views
  • 2 replies
  • 1 kudos

Resolved! How does partition pruning work on a merge into statement?

I have a Delta table that is partitioned by Year, Date, and Month. I'm trying to merge data into it on all three partition columns plus an extra column (an ID). My merge statement is below: MERGE INTO delta.<path of delta table> oldData USING df newData ...
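A minimal sketch (path and column names are hypothetical): pruning on the target generally needs literal predicates on the partition columns in the ON clause, not only column-to-column equality against the source.

updates_df.createOrReplaceTempView("newData")  # the incoming batch

spark.sql("""
    MERGE INTO delta.`<path of delta table>` AS oldData
    USING newData
      ON  oldData.Year  = newData.Year
      AND oldData.Month = newData.Month
      AND oldData.Date  = newData.Date
      AND oldData.ID    = newData.ID
      AND oldData.Year = 2023 AND oldData.Month = 5   -- literal values let Delta prune partitions
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")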

Latest Reply
Umesh_S
New Contributor II
  • 1 kudos

Isn't the suggested idea only filtering the input DataFrame (resulting in a smaller amount of data to match across the whole Delta table) rather than pruning the Delta table down to the relevant partitions to scan?

1 More Replies
Anonymous
by Not applicable
  • 5581 Views
  • 3 replies
  • 14 kudos

Resolved! No suitable driver error When configure the Databricks ODBC and JDBC drivers

Hi all, I've just encountered this issue. I launched a MySQL database in RDS on AWS and then used this simple code to create a connection to it, but it fails with this error. Is there any additional step, or could anyone take a look at i...
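A minimal sketch for a MySQL instance on RDS (endpoint and credentials are placeholders): "No suitable driver" usually means the driver class was not resolved from the URL, so install the MySQL connector JAR on the cluster and pass the driver class explicitly.

df = (spark.read.format("jdbc")
    .option("url", "jdbc:mysql://<rds-endpoint>:3306/<database>")
    .option("dbtable", "my_table")
    .option("user", "<user>")
    .option("password", "<password>")
    .option("driver", "com.mysql.cj.jdbc.Driver")   # explicit driver class
    .load())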

Latest Reply
Jag
New Contributor II
  • 14 kudos

Hello, it looks like an issue with the JDBC URL. When I was trying to access an Azure SQL database I faced the same issue, so I created the JDBC URL as below and it went well: jdbc:sqlserver://<serverurl>:1433;database=<databasename>;user=<username>@<serve...

2 More Replies
alex_python
by New Contributor II
  • 764 Views
  • 3 replies
  • 0 kudos

Division Auto Truncates Decimal Even After Casting Inputs

Division of two numbers is auto-truncating decimals and I can't get a more precise result. Examples of things I've tried:
10 / 60 => 0.17
cast(10 as float) / cast(60 as float) => 0.17
cast(cast(10 as float) / cast(60 as float) as float) => 0.17
round(10 / ...
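A short sketch to check where the trimming happens (Spark SQL promotes integer division to double, so the stored value is 0.1666...; a result shown as 0.17 is often display rounding, and an explicit DECIMAL cast makes the precision and scale visible):

spark.sql("SELECT 10 / 60 AS as_double").show(truncate=False)
spark.sql("SELECT CAST(10 AS DECIMAL(10, 6)) / 60 AS as_decimal").show(truncate=False)
spark.sql("SELECT round(10 / 60, 6) AS rounded").show(truncate=False)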

Latest Reply
Anonymous
Not applicable
  • 0 kudos

Hi @Alex Python Thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers y...

2 More Replies
Kanna1706
by New Contributor III
  • 1049 Views
  • 3 replies
  • 0 kudos

About .dbc notebooks

I am not able to import a .dbc notebook into my Community Edition workspace. Please help.

Latest Reply
Kanna1706
New Contributor III
  • 0 kudos

I imported the .dbc notebook using a URL successfully, but I am not able to import it using the upload file option, and I didn't get any error message or anything when I tried to import it that way.

2 More Replies
Rajkishore
by New Contributor II
  • 5604 Views
  • 6 replies
  • 4 kudos

Need a way to show non-trimmed data while querying a table

When querying JSON data, the values are getting trimmed. I need to see the full data for that field; is there any way to do so?
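A minimal sketch (path and field name are placeholders): the trimming usually comes from the display layer rather than the data itself, and show(truncate=False) prints the full field contents.

df = spark.read.json("/path/to/json/")
df.show(truncate=False)                 # full column values, no 20-character truncation
print(df.select("field").first()[0])    # or pull the raw value out of the result set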

Latest Reply
Anonymous
Not applicable
  • 4 kudos

Hi @Raj Sethi We haven't heard from you since the last response from @Lakshay Goel and @Vigneshraja Palaniraj, and I was checking back to see if their suggestions helped you. Or else, if you have any solution, please share it with the community, a...

5 More Replies
oleole
by Contributor
  • 4818 Views
  • 3 replies
  • 2 kudos

Resolved! Using "FOR XML PATH" in Spark SQL in sql syntax

I'm using Spark version 3.2.1 on Databricks (DBR 10.4 LTS), and I'm trying to convert a SQL Server query into a new query that runs on a Spark cluster using Spark SQL in SQL syntax. However, Spark SQL does not seem to support XML PATH as a functi...
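A minimal sketch of the usual Spark SQL replacement (table and column names follow the accepted answer below): the FOR XML PATH string-concatenation pattern maps to collect_list plus concat_ws.

spark.sql("""
    SELECT UserID,
           concat_ws(',', collect_list(Country)) AS Countries
    FROM UserCountry
    GROUP BY UserID
""").show(truncate=False)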

Latest Reply
oleole
Contributor
  • 2 kudos

Posting the solution that I ended up using:
%sql
DROP TABLE IF EXISTS UserCountry;
CREATE TABLE IF NOT EXISTS UserCountry (
  UserID INT,
  Country VARCHAR(5000)
);
INSERT INTO UserCountry
SELECT
  L.UserID AS UserID,
  CONCAT_WS(',', co...

2 More Replies