- 3233 Views
- 1 replies
- 0 kudos
Is there a way to prevent databricks-connect from installing a global IPython Spark startup script?
I'm currently using databricks-connect through VS Code on MacOS. However, this seems to install (and re-install upon deletion) an IPython startup script which initializes a SparkSession. This is fine as far as it goes, except that this script is *glo...
Databricks Connect on MacOS (and some other platforms) adds a file to the global IPython startup folder, which causes every new IPython session—including those outside the Databricks environment—to attempt loading this SparkSession initialization. Th...
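To check whether this is what is happening on a given machine, one can list the global IPython startup folder (the path below is IPython's default; the exact file name that databricks-connect writes there is an assumption):

```python
import os

# Every new IPython session executes the scripts in this profile's startup
# folder, which is why the SparkSession init leaks into unrelated environments.
profile_dir = os.path.expanduser("~/.ipython/profile_default/startup")
scripts = sorted(os.listdir(profile_dir)) if os.path.isdir(profile_dir) else []
print(scripts)  # look for a databricks-connect file, e.g. a 00-databricks-init.py-style script
```

Deleting the file only helps until databricks-connect reinstalls it; running other projects under a separate IPython profile (`ipython --profile=myproj`) sidesteps `profile_default`'s startup scripts entirely.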
- 4175 Views
- 1 replies
- 0 kudos
PYTEST: Module not found error
Hi, apologies, as I am trying to use pytest for the first time. I know this question has been raised before, and I went through the previous answers, but the issue still exists. I am following Databricks and other articles on using pytest. My structure is simple: -tests--co...
Your issue with ModuleNotFoundError: No module named 'test_tran' when running pytest from a notebook is likely caused by how Python sets the module import paths and the current working directory inside Databricks notebooks (or similar environments). ...
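The failure mode is easy to reproduce and fix with plain `sys.path` mechanics (a sketch; `test_tran` is the module name from the question, the temp-dir setup is illustrative):

```python
import importlib
import os
import sys
import tempfile

# Create a module in a folder that is NOT on sys.path, mimicking a tests/
# directory that pytest cannot see from the notebook's working directory.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "test_tran.py"), "w") as f:
    f.write("ANSWER = 42\n")

try:
    importlib.import_module("test_tran")
except ModuleNotFoundError:
    pass  # same error as in the notebook

# Adding the folder (what a root-level conftest.py or sys.path.append does)
# makes the import succeed.
sys.path.insert(0, tmp)
mod = importlib.import_module("test_tran")
print(mod.ANSWER)  # 42
```

In practice the commonly recommended fix is an empty `conftest.py` at the repository root (which makes pytest put the root directory on `sys.path`), or appending the tests folder to `sys.path` before calling `pytest.main()` from the notebook.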
- 3555 Views
- 1 replies
- 0 kudos
CloudFormation Stack Failure: Custom::CreateWorkspace in CREATE_FAILED State
I am trying to create a workspace using AWS CloudFormation, but the stack fails with the following error:"The resource CreateWorkspace is in a CREATE_FAILED state. This Custom::CreateWorkspace resource is in a CREATE_FAILED state. Received response s...
When a CloudFormation stack fails with “The resource CreateWorkspace is in a CREATE_FAILED state” for a Custom::CreateWorkspace resource, it typically means the Lambda or service backing the custom resource returned a FAILED signal to CloudFormation ...
- 3267 Views
- 1 replies
- 0 kudos
How to Define Constants at Bundle Level in Databricks Asset Bundles for Use in Notebooks?
I'm working with Databricks Asset Bundles and need to define constants at the bundle level based on the target environment. These constants will be used inside Databricks notebooks.For example, I want a constant gold_catalog to take different values ...
There is currently no explicit, built-in mechanism in Databricks Asset Bundles (as of 2024) for directly defining global, environment-targeted constants at the bundle level that can be seamlessly accessed inside notebooks without using job or task pa...
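One common pattern is to declare a bundle variable with per-target overrides and feed it to the notebook as a base parameter. A sketch (the `gold_catalog` name comes from the question; the job and path names are illustrative):

```yaml
# databricks.yml (sketch)
variables:
  gold_catalog:
    description: Catalog for the gold layer
    default: gold_dev

targets:
  dev:
    variables:
      gold_catalog: gold_dev
  prod:
    variables:
      gold_catalog: gold_prod

resources:
  jobs:
    demo_job:
      tasks:
        - task_key: main
          notebook_task:
            notebook_path: ./notebook.py
            base_parameters:
              gold_catalog: ${var.gold_catalog}
```

Inside the notebook the value is then read with `dbutils.widgets.get("gold_catalog")`, so the "constant" follows whichever target the bundle was deployed with.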
- 161 Views
- 2 replies
- 0 kudos
Compilation Failing with Scala SBT build to be used in Databricks
Hi, we have a Scala JAR built with sbt which is used in Databricks jobs to readStream data from Kafka... We are enhancing the from_avro function like below... def deserializeAvro( topic: String, client: CachedSchemaRegistryClient, sc: SparkConte...
Thanks for the update, Louis... As we are planning to sync all our notebooks from Scala to PySpark, we are in the process of converting the code. I think adding the additional dependency of ABRiS or Adobe’s spark-avro with Schema Registry support will tak...
- 142 Views
- 2 replies
- 1 kudos
How to tag/ cost track Databricks Data Profiling?
We recently started using the Data Profiling / Lakehouse Monitoring feature from Databricks https://learn.microsoft.com/en-us/azure/databricks/data-quality-monitoring/data-profiling/. Data Profiling uses serverless compute for running the profilin...
Hi @szymon_dybczak, thanks for the quick reply. But it seems serverless budget policies cannot be applied to data profiling/monitoring jobs. https://learn.microsoft.com/en-us/azure/databricks/data-quality-monitoring/data-profiling/Serverless budget po...
- 113 Views
- 1 replies
- 1 kudos
Wheel File name is changed after using Databricks Asset Bundle Deployment on Github Actions
Hi Team, I am deploying to the Databricks workspace using GitHub and DAB. I have noticed that during deployment, the wheel file name is converted to all lowercase letters (e.g., pyTestReportv2.whl becomes pytestreportv2.whl). This issue does not...
Hi @Nisha_Tech It seems like a Git issue rather than Databricks or DAB. There is a Git configuration parameter that decides the upper/lower case of the deployed file names. Please refer here: https://github.com/desktop/desktop/issues/2672#issuecomme...
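If Git is indeed folding the case, the relevant knob is `core.ignorecase` (standard Git configuration; the throwaway repo below is just for demonstration, in practice run this in the repo that tracks the wheel):

```shell
# Demonstrate in a throwaway repo; in a real setup, run the config commands
# in the repository that tracks the wheel file.
repo=$(mktemp -d)
git -C "$repo" init --quiet
git -C "$repo" config core.ignorecase false   # preserve file-name case
git -C "$repo" config --get core.ignorecase   # prints: false
```

`core.ignorecase` defaults to true on case-insensitive filesystems (macOS, Windows), which can cause a file added as `pyTestReportv2.whl` to be tracked under a different case; forcing it to false may keep the recorded name exact.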
- 1243 Views
- 2 replies
- 0 kudos
I can't create a compute resource beyond "SQL Warehouse", "Vector Search" and "Apps"?
None of the LLMs even understand why I can't create a compute resource. I was using Community (now Free Edition) until yesterday, when it became apparent that I needed the paid version, so I upgraded. I've even got my AWS account connected, which was ...
I have a similar issue; how can I upgrade?
- 3711 Views
- 2 replies
- 0 kudos
Cluster Auto Termination Best Practices
Are there any recommended practices to set cluster auto termination for cost optimization?
Hello! We use a Databricks Job to adjust the cluster’s auto-termination setting based on the day and time. During weekdays from 6 AM to 6 PM, we set auto_termination_minutes to None, which keeps the cluster running. Outside those hours, and on weekend...
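The schedule described above boils down to a small rule plus a call to the clusters/edit API; a minimal sketch of the rule (thresholds are illustrative, and note that in the clusters API `0` is how "auto-termination disabled" is spelled, while the real clusters/edit endpoint expects the full cluster spec to be sent back):

```python
from datetime import datetime

def desired_auto_termination(now: datetime,
                             business_minutes: int = 0,
                             off_hours_minutes: int = 15) -> int:
    """Weekdays 6 AM-6 PM: disable auto-termination (0 in the clusters API);
    nights and weekends: terminate after a short idle window."""
    if now.weekday() < 5 and 6 <= now.hour < 18:
        return business_minutes
    return off_hours_minutes

print(desired_auto_termination(datetime(2024, 6, 3, 10)))  # Monday 10 AM -> 0
print(desired_auto_termination(datetime(2024, 6, 8, 10)))  # Saturday     -> 15
```

A scheduled job evaluates this rule, fetches the current spec with clusters/get, updates `autotermination_minutes`, and posts the whole spec back to clusters/edit.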
- 162 Views
- 0 replies
- 2 kudos
Webinar on Unity Catalog plus giveaway
Building Trust in Data: Why Governance Matters More Than Ever. Data has become the heartbeat of every organisation, driving decisions, shaping strategies, and fuelling innovation. Yet, even in data-rich companies, there’s a quiet problem that every lead...
- 195 Views
- 3 replies
- 0 kudos
Migration from Oracle DB in Linux server to Azure Databricks
Can anyone provide the steps for the migration?
Can anyone share high level steps to migrate from Oracle container DB 19c to Azure Databricks
- 513 Views
- 1 replies
- 0 kudos
Issue with UCX Assessment Installation via Automation Script - Schema Not Found Error
Hello, I'm encountering an issue while installing UCX Assessment via an automation script in Databricks. When running the script, I get the following error: 13:38:06 WARNING [databricks.labs.ucx.hive_metastore.tables] {listing_tables_0} failed-table-c...
The error occurs because the automation script explicitly sets WORKSPACE_GROUPS="<ALL>" and DATABASES="<ALL>", which the UCX installer interprets literally as a schema called "ALL"—instead of using the special meaning that the manual prompt does when...
- 2751 Views
- 1 replies
- 0 kudos
Usability bug in SPARKSQL
Hi Databricks & Team, Spark cluster: 16.3. Being a Databricks partner I am unable to raise a support ticket, hence I am posting this here. PySpark is good at rendering multiple results in a single cell; refer to the screenshot below (Screenshot 1). However, SPAR...
Databricks notebooks currently support multiple outputs per cell in Python (pyspark) but do not provide the same behavior for SQL cells. When running several SQL statements in a single notebook cell, Databricks will only render the output from the la...
- 591 Views
- 2 replies
- 0 kudos
Workflow entry point not working on runtime 16.4-LTS
Hello all, I'm developing Python code that is packaged as a wheel and installed inside a Docker image. However, since this program requires numpy >= 2.0, I'm forced to use runtime 16.4-LTS. When I try to run it as a workflow on Databricks I'm e...
Databricks Runtime 16.4-LTS introduced some changes that affect workflow configuration, environment management, and, notably, support for Python third-party libraries like NumPy. The main issue you’re encountering with NumPy >= 2.0 in Databricks work...
- 4102 Views
- 7 replies
- 0 kudos
Databricks apps - Volumes and Workspace - FileNotFound issues
I have a Databricks App I need to integrate with volumes using local Python os functions. I've set up a simple test: def __init__(self, config: ObjectStoreConfig): self.config = config # Ensure our required paths are created ...
I am also running into this issue. @Alberto_Umana can you advise on what to do here? Seems like a lot of people are having this issue