Databricks Platform Discussions
Dive into comprehensive discussions covering various aspects of the Databricks platform. Join the co...
Engage in vibrant discussions covering diverse learning topics within the Databricks Community. Expl...
I am unable to read a file in workspace/user in the Free Edition. It was possible a few weeks ago, but now it throws this error: "[FAILED_READ_FILE.NO_HINT] Error while reading file dbfs:REDACTED_LOCAL_PART@outlook.com/BigMartSales.csv. SQLSTATE: ...
Hi everyone, We are planning a migration from Azure Databricks to GCP Databricks and would like to understand whether Databricks Asset Bundles (DAB) can be used to migrate workspace assets such as jobs, pipelines, notebooks, and custom serving endpoin...
@iyashk-DB Thanks for the details, it helps.
Hello - I am following some online code to create a function as follows:

CREATE OR REPLACE FUNCTION my_catalog.my_schema.insert_data_function(
  col1_value STRING,
  col2_value INT
)
RETURNS BOOLEAN
COMMENT 'Inserts dat...
In UC, functions must be read-only; they cannot modify state (no INSERT, DELETE, MERGE, CREATE, VACUUM, etc.). So I created a PROCEDURE and called it instead, and I was able to insert data into the table successfully. Unity Catalog tools are really jus...
We are currently managing our permissions via Terraform (including cluster creation, UC governance, etc.). We have a specific `data_engineer` role, and we need everyone with this role to be able to view and manage all of our SDPs. The Issue: Currently...
Our Solution: We moved job and pipeline permissions to DAB configuration files for streamlined enforcement. Terraform will remain the source of truth for workspace-level permissions only.
How are bagging and boosting different when you use them in real machine-learning projects?
Bagging and boosting differ mainly in how they reduce error and when you’d choose them: Bagging (e.g., Random Forest) trains many models independently in parallel on different bootstrap samples to reduce variance, making it ideal for unstable, high-v...
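To make the contrast concrete, here is a minimal pure-Python sketch (not Databricks-specific; the dataset, stump learner, and all names are invented for illustration). Bagging trains decision stumps independently on bootstrap resamples and takes a majority vote; the boosting half is an AdaBoost-style loop that reweights misclassified points so each new stump focuses on them.

```python
import math
import random

# Toy 1-D dataset: low x is class 0, high x is class 1, with x=2
# deliberately mislabeled as 1 to give the ensembles something to do.
X = [1, 2, 3, 4, 6, 7, 8, 9]
y = [0, 1, 0, 0, 1, 1, 1, 1]

def train_stump(X, y, weights):
    """Pick the threshold minimizing weighted error; predict 1 for x >= t."""
    best_t, best_err = None, float("inf")
    for t in X:
        err = sum(w for xi, yi, w in zip(X, y, weights)
                  if (1 if xi >= t else 0) != yi)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def stump_predict(t, x):
    return 1 if x >= t else 0

# --- Bagging: independent stumps on bootstrap resamples, majority vote ---
random.seed(0)
bagged = []
for _ in range(25):
    idx = [random.randrange(len(X)) for _ in range(len(X))]
    Xb, yb = [X[i] for i in idx], [y[i] for i in idx]
    bagged.append(train_stump(Xb, yb, [1.0] * len(Xb)))

def bag_predict(x):
    votes = sum(stump_predict(t, x) for t in bagged)
    return 1 if votes > len(bagged) / 2 else 0

# --- Boosting (AdaBoost flavor): sequential stumps, reweight mistakes ---
weights = [1.0 / len(X)] * len(X)
boosted = []  # list of (threshold, alpha)
for _ in range(5):
    t = train_stump(X, y, weights)
    err = sum(w for xi, yi, w in zip(X, y, weights)
              if stump_predict(t, xi) != yi)
    err = min(max(err, 1e-10), 1 - 1e-10)
    alpha = 0.5 * math.log((1 - err) / err)  # stump's say in the final vote
    boosted.append((t, alpha))
    # Upweight misclassified points so the next stump focuses on them.
    weights = [w * math.exp(alpha if stump_predict(t, xi) != yi else -alpha)
               for xi, yi, w in zip(X, y, weights)]
    total = sum(weights)
    weights = [w / total for w in weights]

def boost_predict(x):
    score = sum(a if stump_predict(t, x) == 1 else -a for t, a in boosted)
    return 1 if score > 0 else 0

# Both ensembles agree on the clean ends of the range:
# bag_predict(1) -> 0, bag_predict(9) -> 1, boost_predict(9) -> 1
```

Note the structural difference the post describes: the bagging loop has no data dependency between iterations (it parallelizes trivially), while each boosting iteration depends on the weights produced by the previous one.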
Replacing records for the entire date with newly arriving data for the given date is a typical design pattern. Now, thanks to simple REPLACE USING in Databricks, it is easier than ever!
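The semantics of this pattern can be sketched outside SQL as well. The snippet below is a hypothetical pure-Python illustration (table, schema, and row values are all made up): every existing row whose date appears in the incoming batch is dropped, and the batch is appended, which is the delete-then-insert behavior that REPLACE USING expresses declaratively in one statement.

```python
from datetime import date

# Existing table rows: (event_date, order_id, amount) -- illustrative schema.
target = [
    (date(2024, 1, 1), "A-1", 10.0),
    (date(2024, 1, 1), "A-2", 15.0),
    (date(2024, 1, 2), "B-1", 7.5),
]

# Late-arriving batch: a full restatement of 2024-01-01.
incoming = [
    (date(2024, 1, 1), "A-1", 12.0),
    (date(2024, 1, 1), "A-3", 9.0),
]

def replace_using(target, incoming, key=lambda row: row[0]):
    """Drop every target row whose key appears in the batch, then append it."""
    incoming_keys = {key(row) for row in incoming}
    kept = [row for row in target if key(row) not in incoming_keys]
    return kept + incoming

result = replace_using(target, incoming)
# 2024-01-02 rows are untouched; all 2024-01-01 rows now come from the batch.
```

The key design point is that the replacement boundary is the matching column's value set, not an explicit predicate: whatever dates arrive in the batch define exactly which slices of the target are rewritten.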
In the Databricks One demo, and even on the official website (https://www.databricks.com/blog/introducing-databricks-one), I see "Domains". How do I enable this, or add a "Domain" to data?
I think domains are a new concept in Private Preview. If Unity Catalog is to become an enterprise catalog, they will need to add that concept.
Is it possible to use the Knowledge Assistant from Databricks One?
Hello, I am creating a vector search index and selected "Compute embeddings" for a Delta table with 19M records. The Delta table has only two columns: ID (selected as the index) and Name (selected for embedding). The embedding model is databricks-gte-large-en. Ind...
@RodrigoE please follow this document - https://docs.databricks.com/aws/en/machine-learning/foundation-model-apis/deploy-prov-throughput-foundation-model-apis#create-your-provisioned-throughput-endpoint-using-the-ui
Hello guys, I use serverless on Databricks on Azure, so I have built a decorator which instantiates a SparkSession. My job uses Auto Loader / Kafka in availableNow mode. Does anyone know which Spark conf is required, because I want to add it? Thanks. import...
Hi Databricks Team, I completed my AI Agentic Fundamentals certificate course today and haven't received my badge for it. Can you help me with this?
I have completed the Databricks Fundamentals Accreditation, but I haven't received the certification badge. The certification is attached, and my email address is: deelakawalagama@outlook.com. Also, I would like to add this badge to my LinkedIn pro...
Hello @Deelaka98, please check your credentials here. Please note that we cannot provide support via the community (and it is not advised to post your email address here). Thanks & Regards, @cert-ops
Hello guys, does anyone know the best practices for setting up Databricks Connect for PyCharm and VS Code using Docker, a Justfile, and a .env file? Cordially, Seefoods
Hi @seefoods! I’ve worked with Databricks Connect and VS Code on different projects, and although your question mentions Docker, Justfile, and .env, the “best practices” really depend on what you’re trying to do. Here’s what has worked best for me: 1.- D...
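One piece of the .env part of the question can be sketched on its own. Below is a minimal, hypothetical parser for illustration (in a real project `python-dotenv` or the Databricks SDK's own config resolution would do this); the file contents are fake, and DATABRICKS_HOST / DATABRICKS_TOKEN / DATABRICKS_CLUSTER_ID are the environment variables Databricks Connect and the Databricks SDK conventionally read.

```python
import os

def parse_dotenv(text):
    """Minimal .env parser: KEY=VALUE lines; '#' comments and blanks ignored."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

# Hypothetical .env contents -- none of these values are real credentials.
dotenv = parse_dotenv("""
# workspace connection
DATABRICKS_HOST="https://adb-1234567890123456.7.azuredatabricks.net"
DATABRICKS_TOKEN="dapi-example-not-a-real-token"
DATABRICKS_CLUSTER_ID="0000-000000-example"
""")

# Let real environment variables win over the file, mirroring the default
# precedence of python-dotenv.
settings = {**dotenv, **{k: v for k, v in os.environ.items() if k in dotenv}}
```

Keeping the .env file out of version control and letting the container (or Justfile recipe) export the same variable names keeps the Docker and local setups interchangeable.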
Hi Databricks Community! This is my first post in this forum, so I hope you can forgive me if it's not according to the forum best practices. After lots of searching, I decided to share the peculiar issue I'm running into with this community. I try to lo...
I guess I was a bit over-enthusiastic in accepting the answer. When I run the following on the single-object array of arrays (as shown in the original post), I get a single row with column "value" and value null. from pyspark.sql import functions as F,...
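The null symptom above is consistent with a schema mismatch: PySpark's from_json returns null when the declared schema does not match the actual JSON shape. Here is a small plain-Python sketch (field names invented, no Spark required) showing the array-of-arrays shape in question and the check that distinguishes it from an array of objects.

```python
import json

# A single JSON object whose field is an array of arrays, similar in shape
# to the original post; "readings" is a made-up field name.
payload = '{"readings": [[1, 2], [3, 4], [5, 6]]}'

doc = json.loads(payload)
rows = doc["readings"]                        # the outer array
flat = [x for inner in rows for x in inner]   # conceptually, explode twice

# A schema declared as an array of structs/objects would not match this
# data: the inner elements are lists, not dicts.
is_array_of_objects = all(isinstance(r, dict) for r in rows)
```

In Spark terms, the nesting depth of the declared ArrayType has to match the data exactly; declaring one level where the data has two (or structs where the data has arrays) yields null rather than an error.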
Hi, I love the Databricks resources but I'm a little confused on what training to take. My focus is studying and practicing for the Databricks Engineer Associate exam, but when I hear of the 'training', I'm not sure which training people are referrin...
Hello @rc10000! +1 to what @Louis_Frolio mentioned above. The Learning Plan is designed for users preparing for the Databricks Certified Data Engineer Associate and Professional exams. Also, below are a few paths, depending on what you're looking for: ...