I have a date range picker filter in a Databricks Lakeview dashboard. When I open the dashboard, no date is selected, and I want to set a default date. Is that possible with Lakeview dashboard filters?
Hi all, I have set up a DLT pipeline that uses Auto Loader in file notification mode. Everything runs smoothly the first time. However, it seems the next micro-batch did not trigger, as I can see some events coming into the queue. But if I lo...
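For context, an Auto Loader source in file notification mode is typically wired up along these lines. This is a minimal sketch only; the format, path, and schema location below are illustrative assumptions, not taken from the pipeline in the post:

```python
# Sketch of an Auto Loader (cloudFiles) streaming read in file notification
# mode. All option values and the path are hypothetical placeholders.
def read_with_autoloader(spark, source_path, schema_location):
    """Return a streaming DataFrame fed by file-notification events."""
    return (spark.readStream
            .format("cloudFiles")
            .option("cloudFiles.format", "json")
            # Notification mode: new-file events come from a cloud queue
            # instead of repeatedly listing the source directory.
            .option("cloudFiles.useNotifications", "true")
            .option("cloudFiles.schemaLocation", schema_location)
            .load(source_path))
```

If events sit in the queue but no micro-batch fires, the stream's trigger settings and the checkpoint/schema location are usually the first things worth checking.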
I ran the code in the cell as it was given in the presentation, but it failed. Can someone please help? The presentation is the second lesson in the second module of the Data Engineering Associate exam prep.
Thanks, Ajay-Pandey! This is the error I keep getting when I run the following: %run ./Includes/Classroom-Setup-02.3L I have run dbutils.library.restartPython(), but it did not help. Note: you may need to restart the kernel using dbutils.library.restart...
Hi, I'm trying to create some 3D charts. With the same code and the same cluster, sometimes the chart displays and sometimes it does not. Previously it would not display, but last week I opened a notebook with a failed run and found the result could be shown by itself (as ...
Also, with the same code and the same browser but different workspaces, one works and the other does not. In the notebook with the "script error", if I use "Export cell", take its iframe HTML, and render it with displayHTML, it works, so the JS and HTML inside are o...
I have a use case to create a table from JSON files. There are 36 million files upstream (an S3 bucket). I just created a volume on top of it, so the volume has 36M files. I'm trying to form a DataFrame by reading this volume using the below sp...
Hi @Sampath_Kumar, Let’s delve into the limitations and best practices related to Databricks volumes.
Volume Limitations:
Managed Volumes: These are Unity Catalog-governed storage volumes created within the default storage location of the contain...
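When the source really does hold tens of millions of JSON files, supplying an explicit schema avoids having Spark sample files just to infer one. A minimal sketch, assuming a hypothetical volume path and two made-up columns:

```python
# Sketch: read a large JSON volume with an explicit schema so no inference
# pass over millions of files is needed. Path and columns are hypothetical.
def read_json_volume(spark, volume_path="/Volumes/catalog/schema/json_vol"):
    from pyspark.sql.types import StructType, StructField, StringType, LongType
    schema = StructType([
        StructField("id", LongType(), True),         # hypothetical field
        StructField("payload", StringType(), True),  # hypothetical field
    ])
    return spark.read.schema(schema).json(volume_path)
```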
I'm just getting started with Databricks and wondering if it is possible to ingest a GeoJSON or GeoParquet file into a new table without writing code? My goal here is to load vector data into a table and perform H3 polyfill operations on all the vect...
I'm looking at this page (Databricks Asset Bundles development work tasks) in the Databricks documentation. When repo assets are deployed to a Databricks workspace, it is not clear whether "databricks bundle deploy" will remove files from the target wo...
One further question: the purpose of “databricks bundle destroy” is to remove all previously deployed jobs, pipelines, and artifacts that are defined in the bundle configuration files. Which bundle configuration files? The ones in the repo? Or are ther...
I need to use the DeltaLog class in my code to get the AddFiles dataset. I have to keep the implemented code in a repo and run it on a Databricks cluster. Some docs say to use the org.apache.spark.sql.delta.DeltaLog class, but it seems Databricks gets rid of ...
Thanks for providing a solution, @pokus. What I don't understand is why Databricks cannot provide the DeltaLog at runtime. How can this be the official solution? We need a better solution instead of depending on reflection.
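For what it's worth, the commit files under `_delta_log` are plain JSON lines, so `add` actions can be read at the protocol level without touching DeltaLog at all. This is only a sketch: it ignores checkpoints and `remove` actions, so it is not a substitute for the real snapshot logic.

```python
import json
import os
import tempfile

def list_add_files(delta_log_dir):
    """Collect data-file paths from 'add' actions in Delta commit JSON files.

    Protocol-level sketch: each NNN...NNN.json commit in _delta_log is a file
    of JSON lines, and an 'add' action records the path of a data file.
    """
    paths = []
    for name in sorted(os.listdir(delta_log_dir)):
        if not name.endswith(".json"):
            continue
        with open(os.path.join(delta_log_dir, name)) as fh:
            for line in fh:
                action = json.loads(line)
                if "add" in action:
                    paths.append(action["add"]["path"])
    return paths

# Demo with a synthetic commit file (no Databricks runtime required).
log_dir = tempfile.mkdtemp()
with open(os.path.join(log_dir, "00000000000000000000.json"), "w") as fh:
    fh.write(json.dumps({"add": {"path": "part-00000.parquet"}}) + "\n")
    fh.write(json.dumps({"commitInfo": {"operation": "WRITE"}}) + "\n")
print(list_add_files(log_dir))  # → ['part-00000.parquet']
```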
Hey folks, I have a .dbc file in a git repo, and when I cloned it in Databricks and tried to open the .dbc file, it says ```Failed to load file. The file encoding is not supported```. Can anyone please advise me on this? #help #beginner
Hi all, the mandatory rowTag for writing to XML doesn't make sense to me, as I already have the complete nested DataFrame schema. In my case I need to implement an extra step to remove that extra node (default: Row) after XML generation. I need some examples ...
Hi @RobsonNLPT, Working with XML in Scala using the scala-xml library can be powerful and flexible.
Let’s break down your requirements and provide an example of how to achieve this.
Removing the “Row” Node: When converting a DataFrame to XML, th...
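As a plain-Python post-processing sketch (standard library only; the `orders` document and the wrapper tag name are made-up examples), the extra wrapper node can be unwrapped after generation:

```python
import xml.etree.ElementTree as ET

def unwrap_row_nodes(xml_string, wrapper_tag="Row"):
    """Lift the children of every wrapper element up one level.

    Hypothetical post-processing step for removing the default Row node
    after XML generation; not tied to any spark-xml API.
    """
    root = ET.fromstring(xml_string)
    # Snapshot the tree first so we can safely mutate while traversing.
    for parent in list(root.iter()):
        for child in list(parent):
            if child.tag == wrapper_tag:
                idx = list(parent).index(child)
                parent.remove(child)
                for offset, grandchild in enumerate(child):
                    parent.insert(idx + offset, grandchild)
    return ET.tostring(root, encoding="unicode")

sample = "<orders><Row><id>1</id></Row><Row><id>2</id></Row></orders>"
print(unwrap_row_nodes(sample))  # → <orders><id>1</id><id>2</id></orders>
```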
I use the below code to connect to PostgreSQL.
df = (spark.read
      .jdbc("jdbc:postgresql://hostname:5432/dbname", "schema.table",
            properties={"user": "user", "password": "password"}))
df.printSchema()
However, I got the ...
I am facing an issue integrating our Spring Boot JPA application with Databricks. Below are the steps and settings we used for the integration. When we start the Spring Boot application, we get a warning: HikariPool-1 - Driver doe...
Hi @satishnavik, It seems you’re encountering issues while integrating your Spring Boot JPA application with Databricks.
Let’s address the warnings and exceptions you’re facing.
Warning: Driver Does Not Support Network Timeout for Connections
The...
I am trying to monitor when a table is created or updated using the audit logs. I have found that structured streaming writes/appends are not captured in the audit logs. Am I missing something? Shouldn't this be captured as a Unity Catalog event? Eith...
Hi @Hertz, Monitoring table creation and updates using audit logs is essential for maintaining data governance and security.
Let’s explore this further.
Databricks, being a cloud-native platform, provides audit logs that allow administrators to t...
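As a starting point, table-level events that do land in the audit log can be pulled from the `system.access.audit` system table. This is a sketch: it assumes system tables are enabled in the workspace, and the action names filtered on here are illustrative and should be checked against your own logs.

```python
# Sketch: query the audit system table for table create/update events.
# Assumes system tables are enabled; action names are illustrative.
AUDIT_QUERY = """
SELECT event_time, user_identity.email, action_name, request_params
FROM system.access.audit
WHERE service_name = 'unityCatalog'
  AND action_name IN ('createTable', 'updateTable')
ORDER BY event_time DESC
"""

def recent_table_events(spark):
    """Return a DataFrame of recent table create/update audit events."""
    return spark.sql(AUDIT_QUERY)
```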