Administration & Architecture
Explore discussions on Databricks administration, deployment strategies, and architectural best prac...
How can I make these people smarter or faster so the final answer is better?
Improving the performance of a Random Forest model on Databricks is usually about data quality, feature engineering, and hyperparameter tuning. Some tips: Feature Engineering: Create meaningful features and remove irrelevant ones. Encode categorical var...
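For the hyperparameter-tuning part, a minimal sketch using Spark ML's CrossValidator is shown below; the DataFrame name `train_df` and the specific parameter grid are assumptions, not anything from the original thread:

```python
# Hedged sketch: tune a Spark ML RandomForestClassifier with cross-validation.
# Assumes `train_df` already has a "features" vector column and a "label" column.
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

rf = RandomForestClassifier(featuresCol="features", labelCol="label")

param_grid = (
    ParamGridBuilder()
    .addGrid(rf.numTrees, [50, 100, 200])
    .addGrid(rf.maxDepth, [5, 10, 15])
    .build()
)

cv = CrossValidator(
    estimator=rf,
    estimatorParamMaps=param_grid,
    evaluator=MulticlassClassificationEvaluator(metricName="f1"),
    numFolds=3,
    parallelism=4,  # evaluate several parameter combinations concurrently
)

cv_model = cv.fit(train_df)
print(cv_model.bestModel.extractParamMap())
```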
Hello, we have already spent a surprisingly large number of DBUs, although we have only uploaded a few tiny tables (9 tables with approx. 10 rows each). We had the idea to change the warehouse from the serverless starter warehouse to a classic 2X-Small in order to save DBUs....
Hello @jasmin_mbi! Did the suggestion shared above help resolve the issue with creating a classic SQL warehouse? If yes, please consider marking the response as the accepted solution.
I have Snowflake Iceberg tables whose metadata is stored in Snowflake Open Catalog. I am trying to read these tables from the Open Catalog and write back to the Open Catalog using Databricks. I have explored the available documentation but haven’t bee...
Greetings @Sunil_Patidar, Databricks and Snowflake can interoperate cleanly around Iceberg today, but how you do it matters. At a high level, interoperability works because both platforms meet at Apache Iceberg and the Iceberg REST Catalog API. Wh...
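As a rough illustration of the "meet at the Iceberg REST Catalog" idea, here is a minimal sketch using the open-source Iceberg Spark runtime's REST catalog support; the catalog name, URI, credentials, and OAuth scope are placeholders, and the Databricks-native Unity Catalog federation route is configured differently:

```python
# Hedged sketch: point a Spark session at an Iceberg REST catalog such as
# Snowflake Open Catalog. Requires the Iceberg Spark runtime JAR on the cluster;
# every URI, name, and credential below is a placeholder.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .config("spark.sql.catalog.open_catalog", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.open_catalog.type", "rest")
    .config("spark.sql.catalog.open_catalog.uri", "https://<open-catalog-host>/api/catalog")
    .config("spark.sql.catalog.open_catalog.credential", "<client-id>:<client-secret>")
    .config("spark.sql.catalog.open_catalog.warehouse", "<catalog-name>")
    .config("spark.sql.catalog.open_catalog.scope", "PRINCIPAL_ROLE:ALL")
    .getOrCreate()
)

# Read an Iceberg table through the REST catalog and write a copy back.
df = spark.table("open_catalog.my_db.my_iceberg_table")
df.writeTo("open_catalog.my_db.my_iceberg_table_copy").createOrReplace()
```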
I’m trying to set up a BigQuery connection in Azure Databricks (Unity Catalog / Lakehouse Federation) using GCP Workload Identity Federation (WIF) instead of a GCP service account key. Environment: Azure Databricks workspace; BigQuery query federation via...
I guess that only the one option is accepted, as the docs say "Google service account key json".
I have an issue with loading a registered MLflow model into a Streamlit app inside Databricks. This is the sample code used for the model load: import mlflow; from mlflow.tracking import MlflowClient; mlflow.set_tracking_uri("databricks"); mlflow.set_registry_uri...
Authentication context isn’t automatically available in Apps. Notebooks automatically inject the workspace host and token for MLflow when you use mlflow.set_tracking_uri("databricks") and mlflow.set_registry_uri("databricks-uc"). In Databricks Apps, you ...
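A minimal sketch of one way to supply that context yourself, assuming the app has a workspace host and a token available (for example from its own environment or a secret); the host, token source, and model URI below are placeholders:

```python
# Hedged sketch: in a Databricks App, set the workspace host and token
# explicitly so MLflow's "databricks" / "databricks-uc" URIs can authenticate.
# All values below are placeholders.
import os
import mlflow

os.environ["DATABRICKS_HOST"] = "https://<your-workspace>.cloud.databricks.com"
os.environ["DATABRICKS_TOKEN"] = os.environ.get("APP_DATABRICKS_TOKEN", "<token>")

mlflow.set_tracking_uri("databricks")
mlflow.set_registry_uri("databricks-uc")

# Load a Unity Catalog registered model by three-level name and version.
model = mlflow.pyfunc.load_model("models:/main.default.my_model/1")
```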
How can I build my own PyTorch machine-learning model and train it faster on Databricks by using multiple machines/GPUs instead of just one?
@Suheb, you may look at TorchDistributor. It provides multiple distributed training options, including single-node multi-GPU training and multi-node training. Below are the references for you. https://docs.databricks.com/aws/en/machine-...
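A minimal sketch of the TorchDistributor pattern is below; the training function body, process count, and hyperparameters are placeholders rather than anything from the thread:

```python
# Hedged sketch: distribute a PyTorch training function across GPUs/nodes
# with TorchDistributor. The body of `train` is a placeholder; it must set up
# torch.distributed itself (e.g. wrap the model in DistributedDataParallel).
from pyspark.ml.torch.distributor import TorchDistributor

def train(learning_rate: float, epochs: int):
    import torch.distributed as dist

    dist.init_process_group("nccl")  # one process per GPU; env vars are injected
    # ... build the model, wrap it in DistributedDataParallel, run the loop ...
    dist.destroy_process_group()
    # return metrics or a checkpoint path so the driver can pick it up

# num_processes is the total number of GPU processes across the cluster.
distributor = TorchDistributor(num_processes=4, local_mode=False, use_gpu=True)
result = distributor.run(train, 1e-3, 5)
```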
We are currently using DLT tables as our target tables. The tables are being loaded by continuous pipelines. Liquid clustering is enabled on the tables. Will VACUUM work on these tables while they are being loaded in continuous mode? How to run t...
VACUUM works fine on DLT tables running in continuous mode. DLT does automatic maintenance (OPTIMIZE + VACUUM) roughly every 24 hours if the pipeline has a maintenance cluster configured. Q: Liquid clustering is enabled on the tables. Will VACUUM wo...
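If you want to trigger it yourself outside the pipeline's automatic maintenance, a minimal sketch run from a separate notebook or job looks like this; the table name and retention window are placeholders:

```python
# Hedged sketch: run VACUUM manually against a streaming/DLT target table from
# a separate notebook or job. Table name and retention are placeholders; keep
# the retention at or above the 7-day default unless you are sure no stream or
# time-travel reader still needs the older files.
spark.sql("VACUUM my_catalog.my_schema.my_streaming_table RETAIN 168 HOURS")

# Optional: confirm recent OPTIMIZE/VACUUM activity in the table history.
display(spark.sql("DESCRIBE HISTORY my_catalog.my_schema.my_streaming_table LIMIT 20"))
```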
Hello, I have created a vector search index for a Delta table with 1,400 rows. Using this vector index to find matching records in a table with 52M records, the query below ran for 20 hrs and failed with: 'HTTP request failed with status: {"error_c...
Hi @RodrigoE, your LATERAL subquery calls the Vector Search function once for every row of the 52M-row table, which results in tens of millions of remote calls to the Vector Search endpoint. This is an anti-pattern and will be extremely slow, leadin...
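One hedged alternative sketch: deduplicate the lookup keys and query the index in a client-side loop, then join the results back, instead of issuing one remote call per source row. The endpoint, index, table, and column names below are placeholders:

```python
# Hedged sketch: batch Vector Search lookups from Python instead of calling
# the index once per row via a LATERAL subquery. All names are placeholders,
# and the response shape may differ slightly between client versions.
from databricks.vector_search.client import VectorSearchClient

vsc = VectorSearchClient()
index = vsc.get_index(
    endpoint_name="my_vs_endpoint",
    index_name="main.default.my_1400_row_index",
)

# Look up each *distinct* key once, rather than once per row of the 52M-row table.
distinct_queries = [
    row["query_text"]
    for row in spark.table("main.default.big_table")
                    .select("query_text").distinct().collect()
]

matches = []
for q in distinct_queries:
    res = index.similarity_search(query_text=q, columns=["id", "name"], num_results=1)
    matches.append((q, res.get("result", {}).get("data_array")))

# `matches` can then be turned into a DataFrame and joined back to the big table.
```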
When trying to filter in the catalog on "Tag", it throws an error that it failed to load values. The other filters do load. I have tried it with different compute types, and I have a view that has a tag (as shown in the screenshot). I have the following privi...
Hello @DylanStout! Did the suggestions shared above help resolve your concern? If so, please consider marking the response as the accepted solution. If you found a different approach that worked, sharing it would be helpful for others in the community...
Hello everyone, I am currently working on an email classification model in Azure Databricks. Since I work for an international company, the emails contain PII data. Because of this, I need to be very careful about compliance and data privacy, especial...
Hi, it is absolutely acceptable. Here are some details you may want to consider. I'd also think about GPU availability in your cloud and region, and whether there is GPU capacity available for deploying these models. You should be able to easily te...
Hi Team, we appear to be experiencing a recent regression in the AI/BI dashboard filter slicer behaviour. Steps to reproduce: Open a dashboard containing a single-select or multi-select filter slicer. Click into the slicer’s text input. Paste text from the...
Hi Tim, I can't find any mention of this internally, but I suspect it is related to this change: "Multi-select filter paste: Viewers can now copy a column of values from a spreadsheet and paste them into a multi-select filter." My recommendation w...
I was going to ingest data from a website into Databricks, but it is showing a "Public DBFS is not enabled" message. Is there any other way to automate data ingestion into Databricks?
Hi @sher_1222, yes, you can upload to cloud storage and then connect using Unity Catalog ("Connect to cloud object storage using Unity Catalog - Azure Databricks | Microsoft Learn"), and then use "What is Auto Loader? | Databricks on AWS" to automatically ing...
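A minimal Auto Loader sketch, assuming the files have already landed in cloud storage; the ADLS paths, file format, and target table name are placeholders:

```python
# Hedged sketch: incrementally ingest files from cloud object storage with
# Auto Loader and append them to a Unity Catalog table. Paths, format, and
# table name are placeholders.
(
    spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation",
            "abfss://landing@mystorage.dfs.core.windows.net/_schemas/web_data")
    .load("abfss://landing@mystorage.dfs.core.windows.net/web_data/")
    .writeStream
    .option("checkpointLocation",
            "abfss://landing@mystorage.dfs.core.windows.net/_checkpoints/web_data")
    .trigger(availableNow=True)  # process new files, then stop (batch-style run)
    .toTable("main.bronze.web_data")
)
```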
This especially happens when the notebook is large, with many cells. Even if I clear all the outputs, scrolling the notebook is way too laggy. When I start running the code, the memory consumption is 3-4 GB minimum, even if I am not displaying any data/ta...
Hi, these are the recommended troubleshooting steps we have: Troubleshooting & Immediate Workarounds. Browser Recommendations: Use an incognito/private window to avoid interference from browser extensions/ad blockers. Monitor memory consumption; close...
I’m using a Databricks workspace on AWS (region: us-west-2). My serverless notebook (CPU) cannot access any external URL; every outbound request fails at DNS resolution. Minimal test in a notebook: import urllib.request; urllib.request.urlopen("https://...
Hi, here are some troubleshooting steps: 1. Network Connectivity Configuration (NCC): Confirm that the correct NCC (such as ncc_public_internet) is attached specifically to serverless compute, not just to SQL warehouses or other resources. After making...
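To narrow down whether the failure really is DNS rather than blocked egress, a small stdlib-only check can help; the hostname is just an example:

```python
# Hedged sketch: distinguish DNS resolution failures from blocked HTTPS egress
# on serverless compute. The hostname is only an example.
import socket
import urllib.request

host = "www.databricks.com"

try:
    addrs = socket.getaddrinfo(host, 443)
    print("DNS OK:", sorted({a[4][0] for a in addrs}))
except socket.gaierror as exc:
    print("DNS resolution failed (egress/NCC configuration likely):", exc)

try:
    with urllib.request.urlopen(f"https://{host}", timeout=10) as resp:
        print("HTTPS OK, status", resp.status)
except Exception as exc:
    print("HTTPS request failed:", exc)
```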
I am trying to write some unit tests using pytest, but I am coming across the problem of how to mock my dbutils method when dbutils isn't defined in my notebook. Is there a way to do this so that I can unit test individual functions that are uti...
If this helps anyone, here is how we do this: we rely on databricks_test for injecting dbutils into the notebooks that we're testing (which is a 3rd-party package, mind you, and hasn't been updated in a while but still works). And in our notebooks we put...
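A related, minimal pattern that avoids third-party packages is to pass dbutils into the functions you test and substitute a mock in pytest; the module name and secret scope/key below are placeholders:

```python
# Hedged sketch: keep notebook-only objects like dbutils out of the functions
# under test by passing them in, then inject a MagicMock in pytest.
# `my_module` and the secret scope/key names are placeholders.

# my_module.py
def read_api_key(dbutils) -> str:
    return dbutils.secrets.get(scope="my_scope", key="api_key")

# test_my_module.py
from unittest.mock import MagicMock

from my_module import read_api_key

def test_read_api_key():
    fake_dbutils = MagicMock()
    fake_dbutils.secrets.get.return_value = "dummy-key"

    assert read_api_key(fake_dbutils) == "dummy-key"
    fake_dbutils.secrets.get.assert_called_once_with(scope="my_scope", key="api_key")
```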