Community Discussions
Connect with fellow community members to discuss general topics related to the Databricks platform, industry trends, and best practices. Share experiences, ask questions, and foster collaboration within the community.

Forum Posts

stackoftuts
by New Contributor
  • 333 Views
  • 0 replies
  • 0 kudos

AI uses

Delve into the transformative realm of AI applications, where innovation merges seamlessly with technology's limitless possibilities. Explore the multifaceted landscape of AI uses and its dynamic impact on diverse industries at StackOfTuts.

Kroy
by Contributor
  • 769 Views
  • 2 replies
  • 0 kudos

Resolved! Multi Customer setup

We are trying to do a POC that shares resources like compute across multiple customers; storage will be different. Is this possible?

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @Kroy, When it comes to shared compute resources in Databricks, there are some best practices and options you can consider: Shared Access Mode for Clusters: Databricks allows you to create clusters in shared access mode. This means that multipl...

1 More Replies
patojo94
by New Contributor II
  • 1759 Views
  • 2 replies
  • 3 kudos

Resolved! Stream failure JsonParseException

Hi all! I am having the following issue with a couple of PySpark streams. I have some notebooks, each running an independent file-based structured stream using a Delta bronze table (gzip parquet files) dumped from Kinesis to S3 in a previous job....

Labels: Community Discussions, Photon, streaming aggregations
Latest Reply
Kaniz_Fatma
Community Manager
  • 3 kudos

Hi @patojo94, You're encountering an issue with malformed records in your PySpark streams. Let's explore some potential solutions: Malformed Record Handling: The error message indicates that there are malformed records during parsing. By default...

1 More Replies
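The corrupt-record handling this reply alludes to corresponds, in Spark, to the JSON reader's PERMISSIVE mode and its `_corrupt_record` column. A minimal plain-Python sketch of that idea (all names here are illustrative, not the poster's actual pipeline):

```python
import json

def parse_permissive(lines):
    """Parse JSON lines; route unparseable records into a corrupt-record
    field instead of failing the whole batch (the PERMISSIVE-mode idea)."""
    rows = []
    for line in lines:
        try:
            rec = json.loads(line)
            rec["_corrupt_record"] = None
            rows.append(rec)
        except json.JSONDecodeError:
            # Keep the raw text so bad records can be inspected later
            rows.append({"_corrupt_record": line})
    return rows

records = ['{"id": 1}', 'not-json', '{"id": 2}']
parsed = parse_permissive(records)
bad = [r for r in parsed if r["_corrupt_record"] is not None]
```

In Spark itself the equivalent would be reader options such as `mode` and `columnNameOfCorruptRecord`; the sketch only shows the routing logic.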
Jay_adb
by New Contributor
  • 801 Views
  • 1 reply
  • 0 kudos

Resolved! Databricks Certification Exam Got Suspended. Need help in resolving the issue

Hi @Cert-Team, My Databricks exam got suspended on December 9, 2023, at 11:30, and it is still in the suspended state. During the exam, it was initially paused due to poor lighting, but after addressing that, it worked fine. However, after some time, ...

Latest Reply
Cert-Team
Honored Contributor III
  • 0 kudos

Hi @Jay_adb, I'm sorry to hear you had this issue. Thanks for filing a ticket with the support team. I have sent a message to them to look into your ticket and resolve it as soon as possible.

JordanYaker
by Contributor
  • 571 Views
  • 0 replies
  • 0 kudos

DAB "bundle deploy" Dry Run

Is there a way to perform a dry-run with "bundle deploy" in order to see the job configuration changes for an environment without actually deploying the changes?

Sujitha
by Community Manager
  • 7988 Views
  • 0 replies
  • 1 kudos

🌟 End-of-Year Community Survey 🌟

Hello Community Members, We value your experience and want to make it even better! Help us shape the future by sharing your thoughts through our quick Survey. Ready to have your voice heard? Click here and take a few moments to complete the surv...

DBEnthusiast
by New Contributor III
  • 1049 Views
  • 3 replies
  • 1 kudos

Resolved! More than expected number of Jobs created in Databricks

Hi Databricks Gurus! I am trying to run a very simple snippet:
data_emp=[["1","sarvan","1"],["2","John","2"],["3","Jose","1"]]
emp_columns=["EmpId","Name","Dept"]
df=spark.createDataFrame(data=data_emp, schema=emp_columns)
df.show()
Based on a g...

Latest Reply
Kaniz_Fatma
Community Manager
  • 1 kudos

I want to express my gratitude for your effort in selecting the most suitable solution. It's great to hear that your query has been successfully resolved. Thank you for your contribution.

2 More Replies
Soma
by Valued Contributor
  • 385 Views
  • 1 reply
  • 0 kudos

df.queryExecution.redactedSql is not working with Spark sql Listener

We are trying to capture the query executed by Spark. We are using df.queryExecution.redactedSql to get the SQL from the query execution, but it is not working in the SQL listener.

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @Soma, In PySpark, when you execute a query and want to capture the SQL from the query execution, you can use the explain() method. 

TechMG
by New Contributor II
  • 783 Views
  • 0 replies
  • 0 kudos

Power BI paginated report

Hello, I am facing a similar kind of issue. I am working on a Power BI paginated report and Databricks is my source for the report. I was trying to pass the parameter by passing the query in the expression builder as mentioned below. https://community.databri...

Lazloo
by New Contributor III
  • 480 Views
  • 1 reply
  • 0 kudos

Using nested dataframes with databricks-connect>13.x

We needed to move to databricks-connect>13.x. Now I am facing an issue when I work with a nested dataframe of the structure:
root
|-- a: string (nullable = true)
|-- b: array (nullable = true)
|    |-- element: struct (containsNull = true)
|    |    |-- c: s...

Latest Reply
Lazloo
New Contributor III
  • 0 kudos

In addition, here is the full stack trace:
23/12/07 14:51:56 ERROR SerializingExecutor: Exception while executing runnable grpc_shaded.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable@33dfd6ec
grpc_shaded.io.grpc...

mvmiller
by New Contributor III
  • 1146 Views
  • 2 replies
  • 0 kudos

How to facilitate incremental updates to an SCD Type 1 table that uses SCD Type 2 source tables

I have an SCD Type 1 delta table (target) for which I am trying to figure out how to facilitate insert, updates, and deletes.  This table is sourced by multiple delta tables, with an SCD Type 2 structure, which are joined together to create the targe...

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @mvmiller, Implementing incremental updates for your SCD Type 1 delta table can be achieved using some effective strategies. Let's explore a few approaches: Delta Lake and Slowly Changing Dimensions (SCD): Delta Lake, with its support for ACID...

1 More Replies
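The merge-based approach the reply outlines reduces to upserting the latest source version per business key and applying deletes, with no history kept. A minimal pure-Python sketch of that Type 1 merge logic (illustrative names; in Databricks this would typically be a Delta MERGE INTO statement):

```python
def scd1_merge(target, changes, deletes=()):
    """Apply SCD Type 1 semantics: changed keys are overwritten in place,
    new keys are inserted, deleted keys are removed.
    `target` and `changes` map business key -> row dict."""
    merged = dict(target)
    merged.update(changes)       # insert new keys, overwrite changed ones
    for key in deletes:
        merged.pop(key, None)    # hard delete; Type 1 keeps no history
    return merged

target = {1: {"name": "Ann"}, 2: {"name": "Bob"}}
changes = {2: {"name": "Robert"}, 3: {"name": "Cara"}}
result = scd1_merge(target, changes, deletes=[1])
```

The incremental part of the poster's problem is deciding what goes into `changes` and `deletes`; with SCD Type 2 sources, one common approach is to select only the current rows (e.g. where the end-date column is open) that changed since the last run.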
dplaut
by New Contributor II
  • 1094 Views
  • 4 replies
  • 0 kudos

Save output of show table extended to table?

I want to save the output of SHOW TABLE EXTENDED IN catalogName LIKE 'mysearchtext*'; to a table. How do I do that?

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @dplaut, To save the output of the SHOW TABLE EXTENDED command to a table, you can follow these steps: First, execute the SHOW TABLE EXTENDED command with the desired regular expression pattern. This command provides detailed information about t...

3 More Replies
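The steps in the reply can be sketched in a notebook: spark.sql returns the SHOW output as a DataFrame, which can then be written with saveAsTable. The Spark calls are shown as comments since they need a live session; the catalog, pattern, and target table names are illustrative:

```python
def show_tables_sql(catalog, pattern):
    """Build a SHOW TABLE EXTENDED statement for a catalog and LIKE pattern."""
    return f"SHOW TABLE EXTENDED IN {catalog} LIKE '{pattern}'"

stmt = show_tables_sql("catalogName", "mysearchtext*")

# In a Databricks notebook (requires a live SparkSession):
# df = spark.sql(stmt)                                   # result is a DataFrame
# df.write.mode("overwrite").saveAsTable("my_schema.table_inventory")
```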
Sujitha
by Community Manager
  • 29125 Views
  • 3 replies
  • 7 kudos

Introducing the Data Intelligence Platforms

Introducing the Data Intelligence Platform, our latest AI-driven data platform constructed on a lakehouse architecture. It’s not just an incremental improvement over current data platforms, but a fundamental shift in product strategy and roadmap.   E...

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 7 kudos

Hmm, I preferred the water-related naming: data lake, Delta Lake, and lakehouse.

2 More Replies
RahuP
by New Contributor II
  • 1294 Views
  • 2 replies
  • 0 kudos
Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @RahuP, The error message you’re encountering, java.lang.NoSuchMethodError: com.amazonaws.services.s3.transfer.TransferManager.<init>, indicates a mismatch between the version of the AWS SDK for Java and the method being called. Let’s break it down...

1 More Replies