Community Platform Discussions
Connect with fellow community members to discuss general topics related to the Databricks platform, industry trends, and best practices. Share experiences, ask questions, and foster collaboration within the community.

Forum Posts

priyansh
by New Contributor II
  • 155 Views
  • 0 replies
  • 0 kudos

UCX

Hey folks! I want to know which features UCX does not provide for UC, especially for Hive-to-UC migration: what can be done manually but not with UCX? Since UCX is still under active development there are quite a few drawbacks, so can someone share t...

TinaN
by New Contributor III
  • 321 Views
  • 2 replies
  • 0 kudos

Resolved! Translating XMLNAMESPACE in SQL Databricks

We are loading a data source that contains XML. I am translating their queries to create views in Databricks. They use 'XMLNAMESPACES' to construct/parse XML.  Below is an example.  What is best practice for translating 'XMLNAMESPACES' in Databricks?...

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @TinaN, To handle XMLNAMESPACES in Databricks, use the from_xml function for parsing XML data, where you can define namespaces within your parsing logic. Start by reading the XML data using spark.read.format("xml"), then apply the from_xml functio...
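
For illustration, a minimal PySpark sketch of that approach, assuming DBR 14.3+ with native XML support. The file path, row tag, and the ns: namespace prefix are hypothetical placeholders; Spark's XML reader keeps namespace prefixes as part of element names rather than declaring them XMLNAMESPACES-style.

```python
# Minimal sketch, assuming DBR 14.3+ native XML support. The file path, the
# rowTag, and the "ns" namespace prefix are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Spark's XML reader keeps namespace prefixes as part of element names, so a
# prefix declared via XMLNAMESPACES in T-SQL simply appears in column names.
df = (
    spark.read.format("xml")
    .option("rowTag", "ns:Customer")  # row element, prefix included
    .load("/Volumes/main/default/raw/customers.xml")
)

# Backticks escape the prefixed names when building the view.
df.select(
    F.col("`ns:CustomerID`").alias("customer_id"),
    F.col("`ns:Name`").alias("name"),
).createOrReplaceTempView("customers_v")
```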

1 More Replies
zll_0091
by New Contributor III
  • 222 Views
  • 1 reply
  • 0 kudos

Can I load files based on the data in my table as variables, without iterating through each row?

Hi, I have created a table that contains the data I need for my source path and target table. source_path: /data/customer/sid={sid}/abc=1/attr_provider={attr_prov}/source_data_provider_code={src_prov}/ So basically, the values of each row are c...

(screenshots attached)
Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @zll_0091, To efficiently load only the necessary files without manually iterating through each row of your table, you can use Spark's DataFrame operations. First, read your table into a DataFrame and determine the maximum key value. Then, filter ...
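
A sketch of that pattern follows; the control table and its column names (sid, attr_prov, src_prov) are assumptions based on the path template in the question.

```python
# Minimal sketch of that pattern. The control table and its columns (sid,
# attr_prov, src_prov) are assumptions based on the path template above.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

paths_df = (
    spark.table("main.default.source_control")  # hypothetical control table
    .select(
        F.format_string(
            "/data/customer/sid=%s/abc=1/attr_provider=%s/source_data_provider_code=%s/",
            "sid", "attr_prov", "src_prov",
        ).alias("source_path")
    )
    .distinct()
)

# One driver-side collect of the distinct path list, then a single
# multi-path read instead of a per-row loop.
paths = [r.source_path for r in paths_df.collect()]
src_df = spark.read.format("parquet").load(paths)  # file format is an assumption
```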

ozbieG
by New Contributor II
  • 292 Views
  • 2 replies
  • 0 kudos

Databricks Certification exam got Suspended - Need Support

Hello Team, @Cert-Team, @Cert-TeamOPS, I had a very bad experience while attempting my first Databricks certification. I was asked to exit the exam multiple times by the support team, citing technical issues. My test got rescheduled multiple times with...

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @ozbieG, I'm sorry to hear your exam was suspended. Thank you for filing a ticket with our support team. Please allow the support team 24-48 hours to resolve it. In the meantime, you can review the following documentation: Room requirements, Behaviour...

1 More Replies
bytetogo
by New Contributor
  • 187 Views
  • 1 reply
  • 0 kudos

What API Testing Tool Do You Use?

Hi Databricks! I am a relatively new developer looking for a solid API testing tool. I am interested in hearing from other developers, new or experienced, about their experiences with API testing tools, good or bad. I've...

Latest Reply
szymon_dybczak
Contributor
  • 0 kudos

Hi @bytetogo, in my daily work I use Postman. It has a user-friendly interface, supports automated testing, and has support for popular patterns and libraries. It is also compatible with Linux, macOS, and Windows.
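
Whatever tool you pick, an automated API test reduces to a request plus assertions. A minimal Python sketch with requests, against a hypothetical endpoint (run with pytest):

```python
# Tool-agnostic sketch: an automated API test is a request plus assertions.
# The endpoint URL and payload shape are hypothetical.
import requests

def test_get_user():
    resp = requests.get("https://api.example.com/users/42", timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    assert body["id"] == 42  # same kind of checks a Postman test script does
    assert "email" in body
```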

vigneshkannan12
by New Contributor
  • 1170 Views
  • 2 replies
  • 0 kudos

typing extensions import match error

I am trying to install the stanza library and create a UDF to generate NER tags for the chunk_text column in my DataFrame. Cluster config: DBR 14.3 LTS, Spark 3.5.0, Scala 2.12. Code below: def extract_entities(text): import stanza; nlp = stanza....

Latest Reply
SaadhikaB
New Contributor II
  • 0 kudos

Was this resolved? I installed openai, tried to import it, and faced the error below. I also tried upgrading the libraries, but ended up with the same error. Code: import openai. Error: ImportError: cannot import name 'override' from 'typing_extensi...
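
A sketch of the usual remedy, assuming a Databricks notebook: older typing_extensions releases lack the override symbol, so upgrade the package in the notebook scope and restart the Python process.

```python
# Databricks notebook sketch (separate cells). Older typing_extensions
# releases lack the `override` symbol, so upgrade it and restart Python.

# Cell 1: upgrade into the notebook-scoped environment.
%pip install --upgrade typing_extensions openai

# Cell 2: restart the Python process so the upgraded package is loaded.
dbutils.library.restartPython()

# Cell 3: the import should now succeed.
import openai
```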

1 More Replies
zll_0091
by New Contributor III
  • 270 Views
  • 3 replies
  • 1 kudos

How to return the function result instead of the function syntax of a variable?

Hi, I'm trying to get a certain value from my variable in the for loop, but it's returning the syntax instead of the value. Also, is it possible to convert this value to an integer? Thanks

(screenshot attached)
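
Since the screenshot isn't visible, the following is only a guess at the usual cause: a variable holding an expression string prints its syntax; evaluating it against a DataFrame yields the value, which int() can then cast. All names are hypothetical.

```python
# Hedged guess, since the screenshot isn't visible: a variable holding an
# expression *string* prints its syntax; evaluating it against a DataFrame
# returns the value. All table/column names here are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

expr_str = "max(key)"  # printing this shows the syntax, not a value
row = spark.table("main.default.t").selectExpr(expr_str).first()
max_key = int(row[0])  # the evaluated scalar, converted to an integer
print(max_key)
```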
Latest Reply
szymon_dybczak
Contributor
  • 1 kudos

Hi @zll_0091, could you provide more code? What's inside the dsfd variable? What's your expected outcome?

2 More Replies
thibault
by Contributor II
  • 742 Views
  • 3 replies
  • 1 kudos

Resolved! Databricks CLI bundle run multiple jobs

Hi! I am using bundles to deploy various workflows, and as part of a CI pipeline I want to run integration tests. They are in notebooks that I deploy and run with the Databricks CLI bundle commands from Azure DevOps. This allows me to run only one job at a time as...
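
One possible workaround (an assumption, not an official pattern): fan out the databricks bundle run calls from a small Python driver in the CI step. The job resource keys and target name below are hypothetical.

```python
# Sketch (not an official pattern): fan out `databricks bundle run` calls
# from a Python driver in the CI step. Job keys and target are hypothetical.
import subprocess
from concurrent.futures import ThreadPoolExecutor

JOB_KEYS = ["integration_tests_a", "integration_tests_b"]  # hypothetical keys

def run_job(key: str) -> int:
    # `databricks bundle run <key>` blocks until the triggered run finishes.
    return subprocess.call(["databricks", "bundle", "run", key, "-t", "dev"])

with ThreadPoolExecutor(max_workers=len(JOB_KEYS)) as pool:
    exit_codes = list(pool.map(run_job, JOB_KEYS))

# Fail the pipeline if any job failed.
assert all(code == 0 for code in exit_codes), exit_codes
```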

Latest Reply
LettieTaylor
New Contributor II
  • 1 kudos

Thank you so much for the information.

2 More Replies
slechtd
by New Contributor II
  • 444 Views
  • 6 replies
  • 0 kudos

Cannot sign-in at accounts.cloud.databricks.com

Hi, I have registered for Community Edition and can access it with no problems through https://community.cloud.databricks.com/login.html. Now I'm interested in completing the free "lakehouse fundamentals" training here and taking the quiz to get the ba...

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @slechtd, to expedite your request, please raise your concerns on our ticketing portal. Our support staff will be able to act on the resolution faster (our standard resolution time is 24-48 hours).

5 More Replies
abueno
by New Contributor III
  • 870 Views
  • 2 replies
  • 1 kudos

Resolved! Find and replace

Hi, is there a "Find and replace" option for editing SQL code? I am not referring to the replace() function, but something similar to Ctrl+Shift+F in Snowflake or Ctrl+F in MS Excel.

Latest Reply
Kaniz_Fatma
Community Manager
  • 1 kudos

Hi @abueno , Thanks for reaching out! Please review the response and let us know if it answers your question. Your feedback is valuable to us and the community. If the response resolves your issue, kindly mark it as the accepted solution. This will h...

1 More Replies
RahulChaubey
by New Contributor III
  • 1749 Views
  • 4 replies
  • 1 kudos

Resolved! Can we get notebook owner using notebook path as parameter in api ?

I need to get the notebook owner via the API, or some other way, by passing the notebook path as a parameter.
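
No single endpoint returns the owner directly, but one approach is to chain the Workspace and Permissions APIs: resolve the path to an object ID, then look for the IS_OWNER entry in the notebook's ACL. A sketch with placeholder host and token:

```python
# Sketch: resolve the notebook path to an object id (Workspace API), then
# read its ACL (Permissions API) and keep IS_OWNER entries. Host and token
# are placeholders.
import requests

HOST = "https://<workspace-host>"
HEADERS = {"Authorization": "Bearer <token>"}

def notebook_owner(path: str) -> list:
    status = requests.get(
        f"{HOST}/api/2.0/workspace/get-status",
        headers=HEADERS, params={"path": path}, timeout=30,
    ).json()
    perms = requests.get(
        f"{HOST}/api/2.0/permissions/notebooks/{status['object_id']}",
        headers=HEADERS, timeout=30,
    ).json()
    return [
        acl.get("user_name") or acl.get("service_principal_name")
        for acl in perms.get("access_control_list", [])
        if any(p.get("permission_level") == "IS_OWNER"
               for p in acl.get("all_permissions", []))
    ]
```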

Latest Reply
Roy26
New Contributor II
  • 1 kudos

@Ayushi_Suthar Can you advise whether it's possible to file a request to add this feature? It would be incredibly useful for maintaining and auditing our notebooks.

3 More Replies
radix
by New Contributor II
  • 818 Views
  • 2 replies
  • 0 kudos

Liquid clustering on and dynamic overwrites

I use the following option to write from multiple tasks to the same table with overwrite (in PySpark): .option("partitionOverwriteMode", "dynamic"). The table was created with PARTITIONED BY, so this works as expected. I read about liquid clustering and it's b...

Latest Reply
Ranjeet1981
New Contributor II
  • 0 kudos

No, it doesn't support partition overwrite.
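
One commonly suggested alternative (an assumption, not confirmed in this thread) is a predicate-scoped overwrite with replaceWhere, which does not depend on partitions. Table and column names below are hypothetical.

```python
# Sketch: predicate-scoped overwrite with replaceWhere, which works without
# partitions (so also on clustered tables). Table/column names hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("2024-08-06", 1)], ["event_date", "value"])

(
    df.write.format("delta")
    .mode("overwrite")
    .option("replaceWhere", "event_date = '2024-08-06'")  # rows this task owns
    .saveAsTable("main.default.events")
)
```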

1 More Replies
Sameekshaji
by New Contributor II
  • 601 Views
  • 3 replies
  • 1 kudos

Memory leak

I created a Databricks JDBC connection class following the code at https://docs.databricks.com/en/integrations/jdbc/authentication.htm. I observed that after a number of executed SQL calls, there were 27000 instances of com.databricks.client.jdbc42.int...

Latest Reply
Kaniz_Fatma
Community Manager
  • 1 kudos

Hi @Sameekshaji, Could you please verify that you’re using the latest version of the Databricks JDBC driver? Sometimes, bugs in older versions can cause resource leaks.

2 More Replies

Connect with Databricks Users in Your Area

Join a Regional User Group to connect with local Databricks users. Events will be happening in your city, and you won’t want to miss the chance to attend and share knowledge.

If there isn’t a group near you, start one and help create a community that brings people together.

Request a New Group