Engage in discussions on data warehousing, analytics, and BI solutions within the Databricks Community. Share insights, tips, and best practices for leveraging data for informed decision-making.
Here's your Data + AI Summit 2024 - Warehousing & Analytics recap: use intelligent data warehousing to improve performance and increase your organization's productivity with analytics, dashboards, and insights.
Keynote: Data Warehouse presente...
Databricks and Snowflake are both powerful platforms designed to address different aspects of data processing and analytics. Databricks shines in big data processing, machine learning, and AI workloads, while Snowflake excels in data warehousing, sto...
Hi, I'm sure I'm missing something as this should be trivial, but I'm struggling to find how to add a date suffix to a table name. Does anyone have a way to do this? Thanks
Hi @BobDobalina - dynamic table naming is not allowed in DBSQL. However, you can try something similar in Python:
%python
from datetime import datetime

# Build a table name suffixed with today's date, e.g. students20240607
date_suffix = datetime.now().strftime("%Y%m%d")
table_name = f"students{date_suffix}"

# Column list is illustrative; the original snippet was truncated after "CREATE"
spark.sql(f"CREATE TABLE {table_name} (id INT, name STRING)")
We have a use case where we need to send a notification to the owners of each table/volume in a schema if the creation date for the table/volume is more than 30 days ago, by triggering a notebook script or through the REST API. Will there be a chance that we get the...
Hi @subbaram - you can create a simple Python script by querying the system table system.information_schema.tables, build a dynamic list of tables whose creation date is more than 30 days old, and alert the table_owner via email.
Hope this helps !!!
Thanks,
Shan
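A minimal sketch of that approach; the schema name and the send_email helper are placeholders, not from the thread:

%python
# Hypothetical sketch: find tables older than 30 days and notify their owners
old_tables = spark.sql("""
    SELECT table_name, table_owner, created
    FROM system.information_schema.tables
    WHERE table_schema = 'my_schema'             -- assumed schema name
      AND created < current_timestamp() - INTERVAL 30 DAYS
""")

for row in old_tables.collect():
    send_email(                                  # placeholder notifier
        to=row["table_owner"],
        subject=f"Table {row['table_name']} is older than 30 days",
    )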
Hi all, working on this project, my team plans to migrate some data from some databases to Databricks. We plan to run this migration by submitting queries to a warehouse through Python on a local machine. Now I was wondering what would be the best app...
Hi, your solution is good, but if I were in charge of this migration I would (see the sketch below):
create the architecture of all tables with their constraints in the Databricks warehouse
export all data in the MySQL database tables as CSV or TXT files
write a notebook with PySpark code to...
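A minimal sketch of the load step, assuming the exported CSVs have headers; the path and table names are illustrative:

%python
# Load one exported CSV and register it as a managed table (names assumed)
df = (spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("/Volumes/main/staging/mysql_export/customers.csv"))

df.write.mode("overwrite").saveAsTable("main.migrated.customers")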
Hello dear community, I am trying to revoke permissions with the API for SQL warehouses. Granting permissions isn't a problem and works like a charm, but revoking won't work. I tried "NO_PERMISSIONS", "NO PERMISSIONS", "DENY", "REVOKE", but I alway...
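One possible approach, assuming the generic Permissions API behavior: there is no "revoke" permission level; PUT replaces the entire access control list, so you set the list without the principal you want to remove. Host, token, and IDs below are placeholders:

%python
# Assumption: revoking = re-setting the full ACL without the revoked principal
import requests

host = "https://<workspace-host>"        # placeholder
warehouse_id = "<warehouse-id>"          # placeholder
headers = {"Authorization": "Bearer <token>"}
url = f"{host}/api/2.0/permissions/sql/warehouses/{warehouse_id}"

# Desired final state: the principal being revoked is simply omitted
new_acl = [
    {"user_name": "keep.user@example.com", "permission_level": "CAN_USE"},
]
requests.put(url, headers=headers, json={"access_control_list": new_acl})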
Hi, I am sending Databricks SQL alerts to an email. I am trying to get the query results table in the body of the email. I have used a custom template with {{QUERY_RESULT_TABLE}} and this works fine for a Teams alert. In Teams, I can see the table prope...
Hi, we have implemented a Databricks Workflow that saves an Excel sheet to a Databricks Volume. Now we want to notify users with an alert when new data arrives in the volume. In the docs I found the SQL command LIST, which returns the columns path, nam...
Hi @RobinK ,
I've tested your code and I was able to reproduce the error. Unfortunately, I haven't found a pure SQL alternative for selecting the results of the LIST command as part of a subquery or CTE and creating an alert based on that.
Fortunately...
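The reply above is truncated; one possible workaround is to list the volume from Python instead of SQL and check for recent files (the volume path and cutoff are placeholders):

%python
# Assumed workaround: detect files newer than a cutoff via dbutils.fs.ls
from datetime import datetime, timedelta

cutoff = datetime.now() - timedelta(days=1)
files = dbutils.fs.ls("/Volumes/main/default/excel_drop")  # placeholder path

# modificationTime is in milliseconds since the epoch
new_files = [f.path for f in files
             if datetime.fromtimestamp(f.modificationTime / 1000) > cutoff]
if new_files:
    print(f"New files detected: {new_files}")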
Hi, I'm seeking some help creating visuals using HTML in SQL queries, similar to those in the Retail Revenue & Supply Chain sample dashboards. When I create my queries based on these, my results display the HTML code instead of the HTML-formatted result...
Hello community, we're cloning (deep clones) data objects from the production catalog to our non-production catalog weekly. The non-production catalog is used to run our dbt transformations to ensure we're not breaking any production models. Lately, we h...
@adisalj I have a small question about how you are handling the deep-cloned data in the target: are you creating a managed table with the data that is being cloned into the target? Can you please post a sample query that you are using between your catalogs to do the deep clone? I am f...
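For reference, the standard Delta deep clone syntax between catalogs looks like this; the catalog, schema, and table names are placeholders, not the poster's actual query:

%python
spark.sql("""
    CREATE OR REPLACE TABLE nonprod_catalog.sales.orders
    DEEP CLONE prod_catalog.sales.orders
""")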
Hello, I am running Spark Structured Streaming, reading from one table, table_1, doing some aggregation, and then writing the results to another table. table_1 is partitioned by ["datehour", "customerID"]. My code is like this: spark.readStream.format("delta").tabl...
To define the initial position, please check this: https://learn.microsoft.com/en-us/azure/databricks/structured-streaming/delta-lake#specify-initial-position
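Per those docs, the startingVersion or startingTimestamp option controls where the stream begins; the values here are illustrative:

%python
df = (spark.readStream
      .format("delta")
      .option("startingTimestamp", "2024-01-01")  # or .option("startingVersion", "5")
      .table("table_1"))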
Executing dbt as a Python package triggers about 200 import warnings when run on Databricks Runtime 13.3, but not on 12.2. The warnings are all the same: <frozen importlib._bootstrap>:914: ImportWarning: ImportHookFinder.find_spec() not found; fallin...
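A possible workaround (my suggestion, not a fix for the underlying runtime change) is to silence ImportWarning before importing dbt:

%python
# Suppress the ImportWarning noise emitted during dbt's imports
import warnings
warnings.filterwarnings("ignore", category=ImportWarning)

import dbt  # import after installing the filter so the warnings are muted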
Hi, how do I get the distinct count of the keywords column listed below?
table = appCatalog
keywords (column):
["data","cis","mining","financial","pso","value"]
["bzo","employee news","news"]
["core.store","fbi"]
["d...
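One way to count distinct keywords across all rows, assuming the column stores a JSON array of strings (the table and column names are from the post):

%python
spark.sql("""
    SELECT COUNT(DISTINCT keyword) AS distinct_keywords
    FROM (
        SELECT explode(from_json(keywords, 'array<string>')) AS keyword
        FROM appCatalog
    )
""").show()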
I'd like to create Gantt charts using the dashboard function. It seems like this could be possible by adding some additional parameters in the bar plot functionality, but I don't see how to do it currently (if there is a way, would love an example!)....
I'm deploying a new workspace for testing the deployed notebooks, but when trying to import the Python files as modules in the newly deployed workspace, I'm getting an error saying "function not found". Two points to note here: 1. If I append absolute p...
Hi @Retired_mod, I see your suggestion to append the necessary path to sys.path. I'm curious if this is the recommendation for projects deployed via Databricks Asset Bundles. I want to maintain a project structure that looks something like this: p...
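For context, the sys.path pattern usually looks like this; the relative path is a placeholder, since the thread's actual project structure is truncated above:

%python
# Make the project root importable from a deployed notebook (path assumed)
import os
import sys

repo_root = os.path.abspath("..")
if repo_root not in sys.path:
    sys.path.append(repo_root)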
I want to create a simple application using Spark Structured Streaming to alert users (via email, SMS, etc.) when stock price data meets certain requirements. I have a data stream: data_stream. However, I'm struggling to address the main issue: how users...
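One common pattern for this kind of rule evaluation is foreachBatch; this is a sketch under my own assumptions, where the user_alert_rules table and the send_alert helper are placeholders:

%python
from pyspark.sql import functions as F

def check_rules(batch_df, batch_id):
    # Join each micro-batch of prices against per-user thresholds
    rules_df = spark.table("user_alert_rules")   # assumed: user, symbol, threshold
    matches = (batch_df.join(rules_df, "symbol")
                       .filter(F.col("price") >= F.col("threshold")))
    for row in matches.collect():
        send_alert(row["user"], row["symbol"], row["price"])  # placeholder

(data_stream.writeStream
    .foreachBatch(check_rules)
    .start())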