Databricks Platform Discussions
Dive into comprehensive discussions covering various aspects of the Databricks platform. Join the co...
Engage in vibrant discussions covering diverse learning topics within the Databricks Community. Expl...
How should the Databricks workspace folder architecture be designed to support cross-team collaboration, access governance, and scalability in an enterprise platform? Please suggest below or share some ideas from your experience. Thanks. Note: I'm new t...
Hi All, We are starting a new Databricks Developer Community publication on Medium, and this is a warm invitation to every Databricks professional, developer, and aspirant who believes in learning by sharing. We are doing this because Databricks learni...
Thanks @Louis_Frolio. Let us do it better together.
One thing becomes very clear when you spend time in the Databricks community: AI is no longer an experiment. It is already part of how real teams build, ship, and operate data systems at scale. For a long time, many organizations treated data engineer...
Thanks @Louis_Frolio for your kind words. Happy to contribute here.
Hi, So far I cannot find a way to programmatically retrieve (via SQL/Python) the subquery/substatement execution history records, shown in the Databricks UI Query History/Profile, that were executed during a task run of a job, as shown in [red boxes] on the atta...
Greetings @ADBricksExplore, Short answer: there isn't a supported public API that returns the "Substatements/Subqueries" panel you see in the Query History or Profile UI. The GraphQL endpoints the UI relies on are internal and not stable or suppo...
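As a partial workaround, system.query.history records the individual SQL statements attributed to a job run via its query_source struct. A minimal sketch, assuming system tables are enabled (the job run ID is a placeholder); note this surfaces warehouse-executed statements, not the UI's internal substatement breakdown:

```sql
-- Statements that system.query.history attributes to one job run
SELECT
  statement_id,
  statement_text,
  execution_status,
  total_duration_ms,
  start_time
FROM system.query.history
WHERE query_source.job_info.job_run_id = 123456789  -- placeholder run ID
ORDER BY start_time;
```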
I'm working on a data usage use case and want to understand the right way to get read bytes and written bytes per table in Databricks, especially for Unity Catalog tables. What I want: for each table, something like: date, table name (catalog.schema.table)...
system.access.audit focuses on governance and admin/security events. It doesn't capture per-table I/O metrics such as read_bytes or written_bytes. Use system.query.history for per-statement I/O metrics (read_bytes, written_bytes, read_rows, written_ro...
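A minimal sketch of that approach, assuming system tables are enabled. Attributing bytes to a single table from statement-level metrics is a heuristic here, since one statement can touch several tables, and catalog.schema.table is a placeholder:

```sql
-- Daily read/written bytes for statements that mention one table
SELECT
  date(start_time)    AS event_date,
  sum(read_bytes)     AS total_read_bytes,
  sum(written_bytes)  AS total_written_bytes,
  sum(read_rows)      AS total_read_rows
FROM system.query.history
WHERE statement_text ILIKE '%catalog.schema.table%'  -- placeholder table reference
GROUP BY date(start_time)
ORDER BY event_date;
```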
Hello Support, In the past, whenever I logged in to access Databricks Community Edition, I was given the option to log into the Legacy Edition or the Databricks Free Edition. However, now I'm forced to log into the Free Edition. Can someone le...
Hi @Carlton, That's because they shut down Community Edition on January 1, 2026. Now you can only use the Free Edition (which is way superior). https://community.databricks.com/t5/announcements/psa-community-edition-retires-on-january-1-2026-move-to-the-fre...
Is there a better way to select source tables than having to pick them manually one by one? I have 96 tables and it's a pain. The GUI keeps going back to the schema, and I have to search through all the tables again. Is there a way to import the tables using ...
So you don't see the option to edit the pipeline? Or, once you click on edit pipeline, you don't see the option to Switch to code version (YAML)? Or, after you Switch to code version (YAML), can you only view that YAML and not edit it?
Hello, I am currently attempting to integrate NetSuite with Databricks using the NetSuite JDBC driver version 8.10.184.0. When I attempt to ingest data from NetSuite into Databricks, I find that the job fails with a checksum error and informs ...
Requirements
To configure NetSuite for Databricks ingestion, you must have the following:
- A NetSuite account with a SuiteAnalytics JDBC driver license.
- Access to the NetSuite2.com data source. The legacy netsuite.com data source is not supported.
- Admini...
Hi Folks, I have a requirement to show the week number in ww format. Please see the code below: select weekofyear(date_add(current_date(), 35)). Also please refer to the screenshot for the result.
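weekofyear() returns an integer (week 7 comes back as 7, not 07), so one way to get the two-digit ww form is to left-pad the result; a minimal sketch:

```sql
-- Two-digit ISO week number, e.g. '07' instead of 7
SELECT lpad(weekofyear(date_add(current_date(), 35)), 2, '0') AS week_ww;
```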
In Oracle, I create schemas and tables and link tables together via primary/foreign keys to run SQL queries. In Databricks, I notice that I can create tables, but how do I link tables together for querying? Do Databricks queries need the key in t...
Primary & Foreign Keys are informational unlike Oracle. You can use Unity Catalog Lineage Graph for easily finding the relationships between tables in Databricks.
Below is how the folder structure of my project looks:
resources/
|- etl_event/
|  |- etl_event.job.yml
src/
|- pipeline/
|  |- etl_event/
|  |  |- transformers/
|  |  |  |- transformer_1.py
|  |  |- utils/
|  |  |  |- logger.py
databricks.ym...
You don't need to use wheel files. Use glob as the key instead of file: https://docs.databricks.com/aws/en/dev-tools/bundles/resources#pipelinelibraries
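A minimal sketch of that glob form, per the docs page above; the pipeline name and include path are placeholders matching the folder structure in the question:

```yaml
resources:
  pipelines:
    etl_event_pipeline:
      libraries:
        - glob:
            include: src/pipeline/etl_event/**
```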
Data + Gen AI is most effective when grounded in real data constraints. On Databricks, combining Gen AI with Spark and Delta accelerates prototyping and testing, but fundamentals still matter: schema design, realistic distributions, and domain understa...
Hello, I have a Databricks workspace with SSO authentication; the IdP is on Azure. The client certificate expired, and now I can't log on to Databricks to add the new one. How can I fix this? Any idea is welcome. Thank you! Best regards, Daniela
This is an AWS Databricks workspace and your SSO is with Entra ID? You'll need to create a support ticket, and then Engineering can disable SSO temporarily, allowing you to log in with user + OTP. The long-term solution here is that you should: Set up Acco...
I'm trying to incrementally back up system.information_schema.table_privileges but am facing challenges:
- No streaming support: Is streaming supported: False
- No unique columns for MERGE: all columns contain common values, no natural key combination
- No timest...
information_schema tables are not Delta tables, which is why you can't stream from them. They are basically views on top of information coming straight from the control-plane database. Also, your query is actually going to be quite slow/expensive (you prob...
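Given that, a common fallback is a full daily snapshot keyed by a snapshot date column rather than streaming or MERGE; a minimal sketch, with the target table name as a placeholder:

```sql
-- One-time setup: empty history table with the same schema plus a snapshot_date key
CREATE TABLE IF NOT EXISTS main.backup.table_privileges_history AS
SELECT current_date() AS snapshot_date, *
FROM system.information_schema.table_privileges
WHERE 1 = 0;

-- Scheduled daily: append today's full snapshot
INSERT INTO main.backup.table_privileges_history
SELECT current_date() AS snapshot_date, *
FROM system.information_schema.table_privileges;
```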