- 1101 Views
- 1 replies
- 0 kudos
I can't create a compute resource beyond "SQL Warehouse", "Vector Search" and "Apps"?
None of the LLMs even understand why I can't create a compute resource. I was using Community Edition (now Free Edition) until yesterday, when it became apparent that I needed the paid version, so I upgraded. I've even got my AWS account connected, which was ...
- 0 kudos
Hello Jeremyy, the Free Edition has some limitations in terms of compute. As you noticed, there is no option to create custom compute; custom compute configurations and GPUs are not supported. Free Edition users only have access to ser...
- 672 Views
- 1 replies
- 0 kudos
Delete workspace in Free account
I created a Free Edition account and used my Google account to log in. I see two workspaces were created, and I want to delete one of them. How can I delete one of the workspaces? If that is not possible, how can I delete my account as a whole?
- 0 kudos
Hello @upskill! Did you possibly sign in twice during setup? That can sometimes lead to separate accounts, each with its own workspace. Currently, there’s no self-serve option to remove a workspace or delete an account. You can reach out to help@data...
- 2627 Views
- 3 replies
- 1 kudos
DQ Expectations Best Practice
Hi there, I hope this is a fairly simple and straightforward question. I'm wondering if there's a "general" consensus on where along the DLT data ingestion + transformation process data quality expectations should be applied. For example, two very si...
- 1 kudos
In my opinion, you can keep the bronze/raw layer as it is and apply the quality checks at the silver layer.
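A minimal sketch of that pattern in DLT, assuming a hypothetical raw source path and made-up column names: the bronze table lands the data as-is and the expectations sit on silver.

```python
import dlt

@dlt.table(comment="Bronze: land the raw data with no expectations")
def events_bronze():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/demo/raw/events")  # hypothetical source path
    )

@dlt.table(comment="Silver: data quality expectations applied here")
@dlt.expect_or_drop("valid_id", "id IS NOT NULL")  # rows failing this are dropped
@dlt.expect("positive_amount", "amount > 0")       # failures are logged, rows kept
def events_silver():
    return dlt.read_stream("events_bronze").select("id", "amount", "event_ts")
```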
- 1131 Views
- 2 replies
- 1 kudos
Resolved! Struggle to parallelize UDF
Hi all, I have 2 clusters that look identical, but one runs my UDF in parallel and the other does not. The one that does is personal; the bad one is shared. import pandas as pd from datetime import datetime from time import sleep import threading # test f...
- 1 kudos
As a side note, a "No Isolation Shared" cluster has no access to Unity Catalog, so no table queries. I resorted to using personal compute assigned to a group.
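A minimal sketch of the kind of parallelism test the question describes, with made-up timings: a sleepy pandas UDF runs once per task, so on a cluster that parallelizes, the job finishes in roughly one sleep interval instead of one per partition.

```python
import time
import pandas as pd
from pyspark.sql.functions import pandas_udf

@pandas_udf("long")
def slow_identity(x: pd.Series) -> pd.Series:
    time.sleep(5)  # simulate 5 seconds of per-batch work
    return x

df = spark.range(8).repartition(8)  # one row per partition -> one batch per task
start = time.time()
df.select(slow_identity("id")).collect()
# roughly 5s if all 8 tasks run in parallel; roughly 40s if they run serially
print(f"elapsed: {time.time() - start:.1f}s")
```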
- 1497 Views
- 1 replies
- 0 kudos
How to override an in-built function in Databricks
I am trying to override the is_member() in-built function in such a way that it always returns true. How can I do that in Databricks using SQL or Python?
- 0 kudos
To re-activate this question: I have a similar requirement. I want to override shouldRetain(log: T, currentTime: Long) in class org.apache.spark.sql.execution.streaming.CompactibleFileStreamLog so that it also always returns true.
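Built-in SQL functions generally can't be replaced directly, so the usual workaround is to register your own function under a distinct name and call that instead. A minimal sketch; always_member is a made-up name, not an actual override of is_member():

```python
from pyspark.sql.types import BooleanType

# Register a session-scoped UDF that unconditionally returns true
spark.udf.register("always_member", lambda group: True, BooleanType())

spark.sql("SELECT always_member('admins') AS member").show()
```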
- 1614 Views
- 4 replies
- 0 kudos
Passing parameters in the dashboard's data section via asset bundles
New functionality allows deploying dashboards with asset bundles. Here is an example: # This is the contents of the resulting baby_gender_by_county.dashboard.yml file. resources: dashboards: baby_gender_by_county: display_name: "Baby gen...
- 0 kudos
variables: catalog: description: "Catalog name for the dataset" default: "dev" parameters: catalog: ${var.catalog}. This doesn't replace parameter values prod -> dev in the JSON when it is being deployed: "datasets": [ { "displayName": "my_t...
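A minimal sketch of the bundle layout under discussion, assuming a hypothetical dashboard JSON at ./src/my_dashboard.lvdash.json; the resource name, variable defaults, and warehouse ID are all made up:

```yaml
variables:
  catalog:
    description: "Catalog name for the dataset"
    default: "dev"
  warehouse_id:
    description: "SQL warehouse used to render the dashboard"
    default: "1234567890abcdef"  # made-up ID

resources:
  dashboards:
    my_dashboard:
      display_name: "My dashboard"
      file_path: ./src/my_dashboard.lvdash.json
      warehouse_id: ${var.warehouse_id}
```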
- 535 Views
- 1 replies
- 0 kudos
Requirements for Managed Iceberg tables with Unity Catalog
Does Databricks support creating native (managed) Apache Iceberg tables in Unity Catalog, or is it possible only in private preview? What are the requirements?
- 0 kudos
Hello @zent! Databricks now fully supports creating Apache Iceberg managed tables in Unity Catalog, and this capability is available in Public Preview (not just private preview). These managed Iceberg tables can be read and written by Databricks and ...
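A minimal sketch of creating a managed Iceberg table in Unity Catalog, assuming the catalog and schema below already exist; the names are made up, and the exact requirements (runtime version, region, preview enablement) should be checked against the docs:

```python
# Managed Iceberg tables use USING ICEBERG instead of the Delta default
spark.sql("""
    CREATE TABLE main.demo.events_iceberg (
        id BIGINT,
        event_ts TIMESTAMP
    )
    USING ICEBERG
""")
```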
- 2385 Views
- 2 replies
- 1 kudos
Resolved! New Regional Group Request
Hello! How may I request and/or create a new Regional Group for the DMV area (DC, Maryland, Virginia)? Thank you, —Anton @DB_Paul @Sujitha
- 1 kudos
Is there a group you have already created?
- 1199 Views
- 3 replies
- 3 kudos
Resolved! How to be a part of Databricks Groups
Hello, I am part of Community Databricks Crew LATAM, where we have reached 300 connected people and have run 3 events, one per month. We want to be part of Databricks Groups, but we don't know how to do that. If somebody can help me I will a...
- 3 kudos
Hi Ana, Thanks for reaching out! I won’t be attending DAIS this time, but we do have a Databricks Community booth set up near the Expo Hall. My colleague @Sujitha will be there. Do stop by to say hi and learn about all the exciting things we have go...
- 122 Views
- 0 replies
- 0 kudos
How is your experience with dbx in 2025?
- 5191 Views
- 7 replies
- 7 kudos
Chrome/Edge high memory usage for Databricks tabs.
Is it normal for Databricks tabs to use such high memory? The Chrome example I just got a screenshot of was this (rounded up/down): 3 Databricks tabs for one user, sized at 6 GB, 4.5 GB, and 2 GB. Total = 12.5 GB. I know it gets higher than this too, I...
- 7 kudos
Lately, I've noticed that Databricks is consuming a lot of memory (from my local machine) in the Chrome tab. I see memory spikes especially when I'm using the SQL editor extensively — at some point, there's even a noticeable delay between typing and ...
- 2074 Views
- 2 replies
- 0 kudos
How to "Python versions in the Spark Connect client and server are different. " in UDF
I've read all the relevant articles but none has a solution that I could understand. Sorry, I'm new to this. I have a simple UDF to demonstrate the problem: df = spark.createDataFrame([(1, 1.0, 'a'), (1, 2.0, 'b'), (2, 3.0, 'c'), (2, 5.0, 'd'), (2, 10.0, 'e')]...
- 0 kudos
Hi @Dimitry, the error you're seeing indicates that the Python version in your notebook (3.11) doesn't match the version used by Databricks Serverless, which is typically Python 3.12. Since Serverless environments use a fixed Python version, this mis...
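A minimal sketch of the first thing to check, assuming the mismatch comes from a local client (for example Databricks Connect): Spark Connect needs the client and server to run the same major.minor Python before UDFs will execute. The (3, 12) below is taken from the reply above and may differ for your runtime.

```python
import sys

required = (3, 12)  # server-side Python per the reply; adjust to your runtime
actual = sys.version_info[:2]
if actual != required:
    raise RuntimeError(
        f"Client Python {actual[0]}.{actual[1]} does not match server Python "
        f"{required[0]}.{required[1]}; recreate your environment with a matching "
        "interpreter before defining UDFs."
    )
```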
- 427 Views
- 1 replies
- 1 kudos
Databricks Dashboard run from Job issue
Hello, I am trying to trigger a Databricks dashboard via a workflow task. 1. When I deploy the job triggering the dashboard task via the local "Deploy bundle" command, deployment is successful. 2. When I try to deploy to a different environment via CI/CD, while ...
- 1 kudos
Hi @anilsampson, the error means your dashboard_task is not properly nested under the tasks section:

tasks:
  - task_key: dashboard_task
    dashboard_task:
      dashboard_id: ${resources.dashboards.nyc_taxi_trip_analysis.id}
      warehouse_id: ${var.warehouse_...
- 3681 Views
- 6 replies
- 2 kudos
In Databricks deployment, .py files are getting converted to notebooks
A critical issue has arisen that is impacting our deployment planning for our client. We have encountered a challenge with our Azure CI/CD pipeline integration, specifically concerning the deployment of Python files (.py). Despite our best efforts, w...
- 2 kudos
Another option is Databricks Asset Bundles.
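A minimal sketch of what the asset-bundle option might look like, with made-up project name, paths, and host; plain .py files synced by a bundle are deployed as workspace files, while only files starting with a notebook source header are imported as notebooks:

```yaml
bundle:
  name: my_project

sync:
  include:
    - src/**  # plain .py files here stay .py files in the workspace

targets:
  dev:
    workspace:
      host: https://adb-1234567890123456.7.azuredatabricks.net  # made-up host
```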
- 1023 Views
- 1 replies
- 2 kudos
Resolved! Cannot run merge statement in the notebook
Hi all, I'm trialing Databricks for running complex Python integration scripts. There will be different data sources (MS SQL, CSV files, etc.) that I need to push to a target system via GraphQL. I selected Databricks over MS Fabric as it can handle comple...
- 2 kudos
Hi @Dimitry, the issue you're seeing is due to delta.enableRowTracking = true. This feature adds hidden _metadata columns, which serverless compute doesn't support; that's why the MERGE fails there. Try this: you can disable row tracking with ALTER...
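The truncated reply points at a table property change. A minimal sketch, assuming a hypothetical table name; delta.enableRowTracking is a real Delta table property:

```python
spark.sql("""
    ALTER TABLE main.demo.my_target_table
    SET TBLPROPERTIES ('delta.enableRowTracking' = 'false')
""")
```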