Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.
Benefit: this would simplify the WHERE clauses for consumers of the tables, right? They could just query on the main date field when they need all the data for a day, rather than on an extra day field we had to create.
@Ryan Hager , yes, it is possible using AUTO GENERATED COLUMNS, available since Delta Lake 1.2. For example, you can automatically generate a date column (for partitioning the table by date) from the timestamp column; any writes into the table need only specify t...
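A minimal sketch of what that can look like with the Python DeltaTable builder (table and column names are made up):

from delta.tables import DeltaTable

# Create a table whose event_date column is generated from event_time, so
# consumers can filter on event_date without maintaining it themselves.
(DeltaTable.createIfNotExists(spark)
    .tableName("events")                              # hypothetical table name
    .addColumn("event_id", "BIGINT")
    .addColumn("event_time", "TIMESTAMP")
    .addColumn("event_date", "DATE",
               generatedAlwaysAs="CAST(event_time AS DATE)")
    .partitionedBy("event_date")
    .execute())

# Writes only need to supply event_time; event_date is filled in automatically.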
Hi Team, can you please suggest how to implement a late-arriving dimension or early-arriving fact, with examples or a sample script for reference? I have to implement this using PySpark. Thanks.
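One common way to sketch the early-arriving-fact side in PySpark, assuming a Delta dimension table and made-up table/column names, is to insert placeholder ("inferred") dimension rows for fact keys that have not arrived yet:

from delta.tables import DeltaTable
from pyspark.sql import functions as F

facts = spark.read.table("staging_fact_sales")         # hypothetical staging facts
dim = DeltaTable.forName(spark, "dim_customer")         # hypothetical dimension

# Distinct keys referenced by the facts, with placeholder attributes.
inferred = (facts.select("customer_id").distinct()
            .withColumn("customer_name", F.lit("UNKNOWN"))
            .withColumn("is_inferred", F.lit(True)))

(dim.alias("d")
   .merge(inferred.alias("s"), "d.customer_id = s.customer_id")
   .whenNotMatchedInsertAll()        # add placeholder rows for unseen keys only
   .execute())

# When the late-arriving dimension record eventually shows up, a normal merge
# on customer_id overwrites the placeholder attributes with the real ones.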
Hi @Hare Krishnan, hope all is well! Just wanted to check in to see whether you were able to resolve your issue. If so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Thanks!
Passed the Fundamentals of the Databricks Lakehouse Platform Accreditation, but no badge received. Tried "https://v2.accounts.accredible.com/retrieve-credentials?", which shows no badge.
Hi @Ranjeet Ahlawat, congratulations on the certification. For any Databricks certification you take, you will receive the certificate and the badge within 24-48 hours, sometimes sooner. All the best for your future certifications...
I cannot log in to my Databricks Community account. I have already tried to get support, and no real support has been given. When I attempt to reset my password, the link gets sent, but once I enter the new password the page gets stuck permanently loading. I...
Hi @Ahmet Korkmaz, hope all is well! Just wanted to check in to see whether you were able to resolve your issue. If so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Thanks!
Let's say I have a Delta table in Azure Databricks that stores the staff details (denormalized). I want to export the data in JSON format and save it as a single file in a storage location. I need help with the Databricks SQL query to group/co...
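A rough PySpark sketch of one way to do this, with made-up table, column, and path names:

from pyspark.sql import functions as F

# Collapse denormalized staff rows into one nested document per department.
staff = spark.read.table("staff_details")

nested = (staff.groupBy("department")
          .agg(F.collect_list(
                  F.struct("staff_id", "staff_name", "role")).alias("staff")))

# coalesce(1) produces a single part-*.json file in the target folder;
# rename it afterwards if an exact file name is required.
(nested.coalesce(1)
       .write.mode("overwrite")
       .json("abfss://exports@mystorageaccount.dfs.core.windows.net/staff_json"))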
After using spark.read.format("snowflake").options(**options).option("dbtable", "table_name").load() to read a table from Snowflake, when I then change the table in Snowflake and read it again, it still gives me the first version of the table. I have wor...
Yes, that would work. However, it is a longish Snowflake query producing a number of tables that are all called by the Databricks notebook, so it requires quite a few changes. I'll use this alternative if I automate the process. However, I think this...
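One thing that may be worth trying, on the assumption that the stale result comes from Spark-side caching rather than from Snowflake itself (the options and table name below are the same placeholders as above):

# Clear cached data/plans before re-reading, so the next load() actually goes
# back to Snowflake instead of reusing previously materialized results.
spark.catalog.clearCache()

df = (spark.read.format("snowflake")
      .options(**options)
      .option("dbtable", "table_name")
      .load())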
Previously we were able to see SQL queries inside spark.sql() with syntax highlighting, like this: But now they just look like a plain string. I know it's not a big issue, but it's still annoying to write SQL that renders as one solid blue string; it makes debugging more cumber...
Hi @Emilio Garza, just a friendly follow-up. Did any of the responses help you resolve your question? If so, please mark it as best. Otherwise, please let us know if you still need help.
Hi there, I need some help creating a new cluster policy that uses a single spot-instance server to complete a job. I want to set this up as job compute to reduce costs and also use one spot instance. The jobs I need to ETL are very short and c...
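A rough sketch of such a policy via the Databricks Python SDK, assuming AWS (on Azure the availability attribute and its values differ) and purely illustrative attribute values:

import json
from databricks.sdk import WorkspaceClient

# Restrict the policy to job compute and pin clusters to a single spot node.
policy_definition = {
    "cluster_type": {"type": "fixed", "value": "job"},
    "num_workers": {"type": "fixed", "value": 0},
    "spark_conf.spark.databricks.cluster.profile": {"type": "fixed", "value": "singleNode"},
    "spark_conf.spark.master": {"type": "fixed", "value": "local[*]"},
    "aws_attributes.availability": {"type": "fixed", "value": "SPOT_WITH_FALLBACK"},
}

w = WorkspaceClient()
w.cluster_policies.create(name="single-spot-job",
                          definition=json.dumps(policy_definition))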
Hi @Avkash Kana, just a friendly follow-up. Did any of the responses help you resolve your question? If so, please mark it as best. Otherwise, please let us know if you still need help.
I am loading 1 billion records from a Spark DataFrame into a target table. On the 7.3 runtime cluster it took 3 hours to complete, but after migrating to the 10.4 runtime cluster it takes 8 hours. How can I reduce the duration?
Hi @Mohammed sadamusean, could you provide more details on what you are doing? What type of transformations/actions are you running? What's your source and sink? Batch or streaming? All that information will help.
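As a first diagnostic step, and purely as a sketch (the table name and partition count are placeholders), it may be worth comparing how many partitions the write produces on each runtime, since 10.4 enables adaptive query execution by default while 7.3 did not:

# Check whether AQE is on and how many partitions the DataFrame ends up with.
print(spark.conf.get("spark.sql.adaptive.enabled"))
print(df.rdd.getNumPartitions())        # df = the DataFrame being written

# Force a comparable degree of write parallelism for an apples-to-apples test.
(df.repartition(200)                    # illustrative value, not a recommendation
   .write.mode("append")
   .saveAsTable("target_table"))        # hypothetical target table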
I am trying to apply row-level security (RLS) to the solution, but Power BI only connects to Databricks using a token, which can't be used in Databricks groups. Is there no other way to apply row-level security with Power BI?
Hi @Daniel Gomes, just a friendly follow-up. Did any of the responses help you resolve your question? If so, please mark it as best. Otherwise, please let us know if you still need help.
It gives the error [RequestId=5e57b66f-b69f-4e8b-8706-3fe5baeb77a0 ErrorClass=METASTORE_DOES_NOT_EXIST] No metastore assigned for the current workspace, when using the following code: spark.conf.set("fs.azure.account.key.mystorageaccount.dfs.core.windows.net", ...
Hi @Farooq ur rehman, just a friendly follow-up. Did any of the responses help you resolve your question? If so, please mark it as best. Otherwise, please let us know if you still need help.
I have a requirement where I need to apply an inverse DQ rule on a table to track the invalid data. For this I can use the following approach:
import dlt
rules = {}
quarantine_rules = {}
rules["valid_website"] = "(Website IS NOT NULL)"
rules["valid_locatio...
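A minimal sketch of the quarantine pattern this describes, assuming DLT and a hypothetical upstream table name (the second rule completes the truncated one above and is a guess):

import dlt

rules = {
    "valid_website": "(Website IS NOT NULL)",
    "valid_location": "(Location IS NOT NULL)",   # guessed completion of the truncated rule
}
# Inverse rule: a row is quarantined when it violates at least one expectation.
quarantine_rules = {"invalid_record": f"NOT({' AND '.join(rules.values())})"}

@dlt.table
@dlt.expect_all_or_drop(rules)
def customers_clean():
    return dlt.read("customers_raw")      # hypothetical upstream table

@dlt.table
@dlt.expect_all_or_drop(quarantine_rules)
def customers_quarantine():
    return dlt.read("customers_raw")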
You can get additional info from the DLT event log, which is stored in Delta format, so you can load it as a table: https://docs.databricks.com/workflows/delta-live-tables/delta-live-tables-event-log.html#data-quality
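If it helps, a small sketch of loading that event log (the storage path is a placeholder for the pipeline's configured storage location):

# The DLT event log lives as a Delta table under <storage-location>/system/events,
# so data-quality results can be queried directly.
event_log = spark.read.format("delta").load(
    "dbfs:/pipelines/<pipeline-id>/system/events")    # placeholder path

(event_log
 .where("event_type = 'flow_progress'")   # flow_progress events carry data_quality details
 .select("timestamp", "details")
 .display())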
Since Databricks Runtime 12.1, "WHEN NOT MATCHED BY SOURCE" has been part of the MERGE syntax. For example, using that clause, we can quickly delete all target rows that don't match any source row.
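A minimal illustration of that clause, run on DBR 12.1 or later (table and key names are made up):

# Upsert from source and delete target rows that no longer exist in the source,
# all in one MERGE statement.
spark.sql("""
    MERGE INTO target t
    USING source s
    ON t.id = s.id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
    WHEN NOT MATCHED BY SOURCE THEN DELETE
""")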
I'm trying to build a gold-level streaming live table based on a left join of two streaming silver live tables. This attempt fails with the following error: "Append mode error: Stream-stream LeftOuter join between two streaming DataFrames/Datasets is not suppo...
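For reference, in plain Structured Streaming (and the same constraint applies under DLT) a stream-stream left outer join only works when both inputs carry watermarks and the join condition bounds event time; a rough sketch with made-up table and column names:

from pyspark.sql import functions as F

# Both streams need a watermark, and the join condition must bound event time,
# otherwise Spark cannot expire state for the outer side of the join.
impressions = (spark.readStream.table("silver_impressions")   # hypothetical table
               .withWatermark("impression_time", "2 hours")
               .alias("i"))
clicks = (spark.readStream.table("silver_clicks")             # hypothetical table
          .withWatermark("click_time", "3 hours")
          .alias("c"))

joined = impressions.join(
    clicks,
    F.expr("""
        i.ad_id = c.ad_id AND
        c.click_time BETWEEN i.impression_time
                         AND i.impression_time + INTERVAL 1 HOUR
    """),
    "leftOuter",
)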