- 1539 Views
- 2 replies
- 1 kudos
Resolved! Unpivoting data in live tables
I am loading data from CSV into live tables. I have a live Delta table with data like this: WaterMeterID, ReadingDateTime1, ReadingValue1, ReadingDateTime2, ReadingValue2. It needs to be unpivoted into this: WaterMeterID, ReadingDateTime1, ReadingValue1...
Hi @SamGreene, The stack function allows you to unpivot columns by rotating their values into rows. It’s available in both Scala and PySpark.
- 441 Views
- 0 replies
- 0 kudos
AI uses
Delve into the transformative realm of AI applications, where innovation merges seamlessly with technology's limitless possibilities. Explore the multifaceted landscape of AI uses and its dynamic impact on diverse industries at StackOfTuts.
- 1142 Views
- 2 replies
- 0 kudos
Resolved! Multi Customer setup
We are trying to do a POC with shared resources, such as compute, across multiple customers; storage will be separate. Is this possible?
Hi @Kroy, When it comes to shared compute resources in Databricks, there are some best practices and options you can consider: Shared Access Mode for Clusters: Databricks allows you to create clusters in shared access mode. This means that multipl...
- 2305 Views
- 2 replies
- 3 kudos
Resolved! Stream failure JsonParseException
Hi all! I am having the following issue with a couple of PySpark streams. I have some notebooks, each running an independent file-based structured stream from a Delta bronze table (gzip parquet files) dumped from Kinesis to S3 in a previous job....
Hi @patojo94, You're encountering an issue with malformed records in your PySpark streams. Let's explore some potential solutions: Malformed Record Handling: The error message indicates that there are malformed records during parsing. By default...
- 1065 Views
- 1 reply
- 0 kudos
Resolved! Databricks Certification Exam Got Suspended. Need help in resolving the issue
Hi @Cert-Team, My Databricks exam got suspended on December 9, 2023, at 11:30, and it is still in the suspended state. During the exam, it was initially paused due to poor lighting, but after addressing that, it worked fine. However, after some time, ...
Hi @Jay_adb, I'm sorry to hear you had this issue. Thanks for filing a ticket with the support team. I have sent them a message to look into your ticket and resolve it as soon as possible.
- 712 Views
- 0 replies
- 0 kudos
DAB "bundle deploy" Dry Run
Is there a way to perform a dry-run with "bundle deploy" in order to see the job configuration changes for an environment without actually deploying the changes?
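The thread has no reply, but for reference: I am not aware of a documented dry-run flag for `bundle deploy`; however, `databricks bundle validate` renders the resolved configuration for a target without deploying anything. A CLI sketch, assuming a recent Databricks CLI and an example target name:

```shell
# Render the resolved bundle configuration for the "dev" target
# without deploying (the target name here is an example).
databricks bundle validate -t dev
```

Comparing its output against the currently deployed configuration is a manual step; validate itself only checks and renders the bundle.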
- 8107 Views
- 0 replies
- 1 kudos
🌟 End-of-Year Community Survey 🌟
Hello Community Members, We value your experience and want to make it even better! Help us shape the future by sharing your thoughts through our quick Survey. Ready to have your voice heard? Click here and take a few moments to complete the surv...
- 1504 Views
- 3 replies
- 1 kudos
Resolved! More than expected number of Jobs created in Databricks
Hi Databricks Gurus! I am trying to run a very simple snippet:
data_emp = [["1","sarvan","1"], ["2","John","2"], ["3","Jose","1"]]
emp_columns = ["EmpId","Name","Dept"]
df = spark.createDataFrame(data=data_emp, schema=emp_columns)
df.show()
Based on a g...
I want to express my gratitude for your effort in selecting the most suitable solution. It's great to hear that your query has been successfully resolved. Thank you for your contribution.
- 553 Views
- 1 reply
- 0 kudos
df.queryExecution.redactedSql is not working with Spark sql Listener
We are trying to capture the query executed by Spark. We are trying to use df.queryExecution.redactedSql to get the SQL from the query execution, but it is not working in the SQL listener.
Hi @Soma, In PySpark, when you execute a query and want to capture the SQL from the query execution, you can use the explain() method.
- 2033 Views
- 0 replies
- 0 kudos
Power BI paginated report
Hello, I am facing a similar kind of issue. I am working on a Power BI paginated report, and Databricks is my source for the report. I was trying to pass the parameter by putting the query in the expression builder, as mentioned below: https://community.databri...
- 691 Views
- 1 reply
- 0 kudos
Using nested dataframes with databricks-connect>13.x
We needed to move to databricks-connect>13.x. Now I am facing the issue that when I work with a nested dataframe of the structure:
root
 |-- a: string (nullable = true)
 |-- b: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- c: s...
In addition, here is the full stack trace: 23/12/07 14:51:56 ERROR SerializingExecutor: Exception while executing runnable grpc_shaded.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable@33dfd6ec grpc_shaded.io.grpc...
- 1682 Views
- 2 replies
- 0 kudos
How to facilitate incremental updates to an SCD Type 1 table that uses SCD Type 2 source tables
I have an SCD Type 1 delta table (target) for which I am trying to figure out how to facilitate inserts, updates, and deletes. This table is sourced by multiple delta tables with an SCD Type 2 structure, which are joined together to create the targe...
Hi @mvmiller, Implementing incremental updates for your SCD Type 1 delta table can be achieved using some effective strategies. Let’s explore a few approaches: Delta Lake and Slowly Changing Dimensions (SCD): Delta Lake, with its support for ACID...
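As one hedged sketch of the MERGE pattern such approaches typically build on — the table names, key, attributes, and the `is_current` flag are all invented for illustration (your SCD Type 2 sources may use end-date columns instead):

```sql
-- Take only the current rows from the SCD Type 2 source and
-- upsert them into the SCD Type 1 target.
MERGE INTO target_scd1 AS t
USING (
  SELECT key_col, attr_1, attr_2
  FROM source_scd2
  WHERE is_current = true
) AS s
ON t.key_col = s.key_col
WHEN MATCHED THEN
  UPDATE SET t.attr_1 = s.attr_1, t.attr_2 = s.attr_2
WHEN NOT MATCHED THEN
  INSERT (key_col, attr_1, attr_2)
  VALUES (s.key_col, s.attr_1, s.attr_2)
```

Deletes need an extra step (for example a `WHEN NOT MATCHED BY SOURCE THEN DELETE` clause, or an anti-join against the current source keys), and incremental runs would restrict the source to rows changed since the last load, e.g. via Change Data Feed.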
- 1528 Views
- 4 replies
- 0 kudos
Save output of show table extended to table?
I want to save the output of SHOW TABLE EXTENDED IN catalogName LIKE 'mysearchtext*'; to a table. How do I do that?
Hi @dplaut, To save the output of the SHOW TABLE EXTENDED command to a table, you can follow these steps: First, execute the SHOW TABLE EXTENDED command with the desired regular expression pattern. This command provides detailed information about t...
- 29619 Views
- 3 replies
- 7 kudos
Introducing the Data Intelligence Platforms
Introducing the Data Intelligence Platform, our latest AI-driven data platform constructed on a lakehouse architecture. It’s not just an incremental improvement over current data platforms, but a fundamental shift in product strategy and roadmap. E...
Hmm, I preferred the water-related naming, like data lake, Delta Lake, and lakehouse.
- 1704 Views
- 2 replies
- 0 kudos
java.lang.NoSuchMethodError: com.amazonaws.services.s3.transfer.TransferManager.<init>(Lcom/amazonaw
Hi @RahuP, The error message you’re encountering, java.lang.NoSuchMethodError: com.amazonaws.services.s3.transfer.TransferManager.<init>, indicates a mismatch between the version of the AWS SDK for Java and the method being called. Let’s break it down...