- 500 Views
- 2 replies
- 1 kudos
How to tag / cost-track Databricks Data Profiling?
We recently started using the Data Profiling / Lakehouse Monitoring feature from Databricks: https://learn.microsoft.com/en-us/azure/databricks/data-quality-monitoring/data-profiling/. Data Profiling uses serverless compute for running the profilin...
- 1 kudos
Hi @szymon_dybczak, thanks for the quick reply. But it seems serverless budget policies cannot be applied to data profiling/monitoring jobs: https://learn.microsoft.com/en-us/azure/databricks/data-quality-monitoring/data-profiling/ Serverless budget po...
- 315 Views
- 1 replies
- 1 kudos
Wheel file name is changed after Databricks Asset Bundle deployment on GitHub Actions
Hi Team, I am deploying to the Databricks workspace using GitHub Actions and DAB. I have noticed that during deployment, the wheel file name is converted to all lowercase letters (e.g., pyTestReportv2.whl becomes pytestreportv2.whl). This issue does not...
- 1 kudos
Hi @Nisha_Tech This seems like a Git issue rather than a Databricks or DAB one. There is a git configuration parameter that decides the upper/lower case of deployed file names. Please refer here: https://github.com/desktop/desktop/issues/2672#issuecomme...
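If the renaming does come from git's case handling, the setting in question is `core.ignorecase`. A minimal sketch that flips it in a throwaway repo (assumes `git` is on the PATH; whether this fixes the DAB deployment is the hypothesis, not a guarantee):

```python
import subprocess
import tempfile

# Create an isolated repo so we don't touch any real project's settings.
repo = tempfile.mkdtemp()
subprocess.run(["git", "-C", repo, "init", "-q"], check=True)

# Tell git to treat file-name case changes as real changes.
subprocess.run(["git", "-C", repo, "config", "core.ignorecase", "false"], check=True)

# Read the value back to confirm.
out = subprocess.run(
    ["git", "-C", repo, "config", "--get", "core.ignorecase"],
    check=True, capture_output=True, text=True,
)
print(out.stdout.strip())  # false
```

On case-insensitive filesystems (macOS, Windows) git defaults `core.ignorecase` to true, which is why a rename that only changes case can silently not be recorded.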
- 1531 Views
- 2 replies
- 0 kudos
I can't create a compute resource beyond "SQL Warehouse", "Vector Search" and "Apps"?
None of the LLMs even understand why I can't create a compute resource. I was using Community (now Free Edition) until yesterday, when it became apparent that I needed the paid version, so I upgraded. I've even got my AWS account connected, which was ...
- 0 kudos
I have a similar issue. How can I upgrade?
- 6019 Views
- 2 replies
- 0 kudos
Cluster Auto Termination Best Practices
Are there any recommended practices to set cluster auto termination for cost optimization?
- 0 kudos
Hello! We use a Databricks Job to adjust the cluster’s auto-termination setting based on the day and time. During weekdays from 6 AM to 6 PM, we set auto_termination_minutes to None, which keeps the cluster running. Outside those hours, and on weekend...
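A hypothetical sketch of the schedule logic described above (the 6 AM to 6 PM weekday window; the 30-minute off-hours timeout is an illustrative value, and the SDK call that would apply it is only hinted at in a comment):

```python
from datetime import datetime

def desired_autotermination(now: datetime):
    """Return None to keep the cluster alive on weekdays 6 AM-6 PM,
    otherwise a 30-minute idle timeout (30 is an illustrative value)."""
    is_weekday = now.weekday() < 5      # Monday=0 .. Friday=4
    in_window = 6 <= now.hour < 18
    return None if (is_weekday and in_window) else 30

# A scheduled Databricks Job could then apply the value, e.g. via the
# Databricks SDK's cluster edit API (sketch, not a complete call):
# w.clusters.edit(cluster_id=..., autotermination_minutes=desired_autotermination(datetime.now()), ...)

print(desired_autotermination(datetime(2024, 6, 3, 10)))  # Monday 10 AM -> None
print(desired_autotermination(datetime(2024, 6, 8, 10)))  # Saturday -> 30
```

Keeping the schedule decision in a pure function like this makes the job easy to test without touching a real cluster.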
- 429 Views
- 0 replies
- 2 kudos
Webinar on Unity Catalog plus giveaway
Building Trust in Data: Why Governance Matters More Than Ever. Data has become the heartbeat of every organisation, driving decisions, shaping strategies, and fuelling innovation. Yet, even in data-rich companies, there’s a quiet problem that every lead...
- 499 Views
- 3 replies
- 0 kudos
Migration from Oracle DB in Linux server to Azure Databricks
Can anyone provide the steps for this migration?
- 0 kudos
Can anyone share high-level steps to migrate from an Oracle 19c container database to Azure Databricks?
- 732 Views
- 1 replies
- 0 kudos
Issue with UCX Assessment Installation via Automation Script - Schema Not Found Error
Hello, I'm encountering an issue while installing the UCX assessment via an automation script in Databricks. When running the script, I get the following error: 13:38:06 WARNING [databricks.labs.ucx.hive_metastore.tables] {listing_tables_0} failed-table-c...
- 0 kudos
The error occurs because the automation script explicitly sets WORKSPACE_GROUPS="<ALL>" and DATABASES="<ALL>", which the UCX installer interprets literally as a schema called "ALL", instead of using the special meaning that the manual prompt does when...
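As a hypothetical illustration of the sentinel-vs-literal distinction (function and variable names are made up for this sketch, not UCX internals):

```python
ALL_SENTINEL = "<ALL>"

def resolve_databases(raw_value, available):
    """Expand the "<ALL>" sentinel into every available schema; otherwise
    treat the value as a comma-separated list of literal schema names."""
    if raw_value.strip() == ALL_SENTINEL:
        return list(available)
    return [name.strip() for name in raw_value.split(",") if name.strip()]

print(resolve_databases("<ALL>", ["bronze", "silver"]))   # ['bronze', 'silver']
print(resolve_databases("sales, hr", ["bronze"]))         # ['sales', 'hr']
```

The bug pattern described above is the opposite of this: the installer path taken by the script skipped the sentinel check and went straight to the literal-name branch.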
- 3099 Views
- 1 replies
- 0 kudos
Usability bug in SPARKSQL
Hi Databricks Team, Spark cluster: 16.3. Being a Databricks partner I am unable to raise a support ticket, hence am posting this here. PySpark is good at rendering multiple results in a single cell; refer to the screenshot below (Screenshot 1). However, SPAR...
- 0 kudos
Databricks notebooks currently support multiple outputs per cell in Python (pyspark) but do not provide the same behavior for SQL cells. When running several SQL statements in a single notebook cell, Databricks will only render the output from the la...
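One common workaround, sketched here under the assumption of a notebook where `spark` and `display` are available, is to split the cell's text into individual statements and render each result from a Python cell (the naive splitter below ignores semicolons inside string literals):

```python
def split_statements(sql_text):
    """Naively split a multi-statement SQL string on semicolons.
    (Assumption: no semicolons appear inside string literals.)"""
    return [s.strip() for s in sql_text.split(";") if s.strip()]

stmts = split_statements("SELECT 1; SELECT 2;")
print(stmts)  # ['SELECT 1', 'SELECT 2']

# In a Databricks notebook you could then render every result, not just the last:
# for stmt in stmts:
#     display(spark.sql(stmt))
```

Alternatively, putting each SQL statement in its own cell keeps the native SQL rendering without any Python glue.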
- 1580 Views
- 2 replies
- 0 kudos
Workflow entry point not working on runtime 16.4-LTS
Hello all, I'm developing Python code that is packaged as a wheel and installed inside a Docker image. However, since this program requires numpy >= 2.0, I'm forced to use runtime 16.4-LTS. When I try to run it as a workflow on Databricks I'm e...
- 0 kudos
Databricks Runtime 16.4-LTS introduced some changes that affect workflow configuration, environment management, and, notably, support for Python third-party libraries like NumPy. The main issue you’re encountering with NumPy >= 2.0 in Databricks work...
- 1036 Views
- 3 replies
- 3 kudos
Resolved! How to update Databricks official documentation?
Databricks provides excellent documentation. But in some rare cases I found it could have been improved. For example in the documentation for Data Profiling requirements (https://docs.databricks.com/aws/en/data-quality-monitoring/data-profiling/#requ...
- 3 kudos
@Charuvil, you can send feedback to the Databricks team via the link below: Send us feedback
- 4688 Views
- 5 replies
- 2 kudos
Resolved! Cluster by auto pyspark
I can find documentation to enable automatic liquid clustering with SQL code: CLUSTER BY AUTO. But how do I do this with PySpark? I know I can do it with spark.sql("ALTER TABLE CLUSTER BY AUTO") but ideally I want to pass it as an .option(). Thanks in...
- 2 kudos
This is supported now for DBR 16.4+ for both DataframeWriterV1 and DataframeWriterV2 APIs, and also for DLT, and DataStreaming APIs. More details are here: https://docs.databricks.com/aws/en/delta/clustering . Basically using the option, `.option("cl...
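A minimal sketch of what that writer option could look like, assuming the full option name is `clusterByAuto` as documented in the linked clustering page (verify against your DBR version; `df` and the table name are placeholders):

```python
# Option key/value as documented for automatic liquid clustering (DBR 16.4+).
# Treat the name as an assumption if your DBR version differs.
CLUSTER_BY_AUTO = {"clusterByAuto": "true"}

# In a notebook with a DataFrame `df`:
# (df.write
#    .format("delta")
#    .options(**CLUSTER_BY_AUTO)
#    .saveAsTable("catalog.schema.my_table"))

print(CLUSTER_BY_AUTO)
```

Keeping the option in one place makes it easy to apply consistently across batch, DLT, and streaming writers.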
- 708 Views
- 3 replies
- 3 kudos
where do I find the AI/BI dashboard?
I only see Dashboards under the SQL header in the left menu, but I expected a Data header. Got the feeling that under SQL, the dashboard lacks functionality. Best regards, Stefaan
- 3 kudos
@Neirynck I'd HIGHLY recommend checking out Databricks' YouTube channel or Demo Center when you're exploring parts of the product. I'll link them below. @Neirynck, as for your question, here's a video on AI/BI that actually sold Databricks to me: https...
- 238 Views
- 0 replies
- 1 kudos
Where should we host the next Meetup in the US?
Hi all, I collaborate with a Databricks partner, and we've started organizing in-person Meetups in the states. We want to arrange another one, possibly in December or late January. Where do you think most people will be interested in joining? It's fr...
- 3102 Views
- 1 replies
- 0 kudos
Databricks workspace
I have one issue. I have created a Delta table and a vector search index from the Delta table. For a particular query, when I do a similarity search, sometimes I get the documents and sometimes I get none. For example# qu...
- 0 kudos
This inconsistent behavior in your Delta Table and vector search index is a common issue with semantic vector searches, especially when working with diverse or structured data like Excel file contents. There are several likely causes for why your sim...
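To see why near-threshold matches can flip in and out of results, here is a small self-contained cosine-similarity sketch (toy 2-D vectors and a made-up cutoff, not the actual embedding model or index settings):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

query = [1.0, 0.0]
doc = [0.8, 0.6]      # scores exactly 0.8 against the query
threshold = 0.81      # a cutoff just above the doc's score drops it entirely

score = cosine(query, doc)
print(round(score, 2), score >= threshold)  # 0.8 False
```

A document sitting this close to the cutoff (or to the boundary of the top-k result set) will appear or disappear with tiny changes in query phrasing, which matches the intermittent behaviour described.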
- 1602 Views
- 1 replies
- 1 kudos
Resolved! What’s the difference between the Free/Community Edition vs the fully paid version of Databricks, an
Databricks has a free version (called Community Edition) and a paid version. What are the main differences between them, and what things can’t I do in the free version that I can do in the paid one?
- 1 kudos
Hi @Suheb. The Free Edition is intended for students, hobbyists, and aspiring data and AI professionals. It is not intended for commercial use. In addition, the Free Edition is subject to the following limitations: Databricks Free Edition limitations ...