- 9297 Views
- 2 replies
- 0 kudos
How to detect if running in a workflow job?
Hi there, what's the best way to differentiate in what environment my Spark session is running? Locally I develop with databricks-connect's DatabricksSession, but that doesn't work when running a workflow job, which requires SparkSession.getOrCreate()....
- 0 kudos
    import json

    def get_job_context():
        """Retrieve job-related context from the current Databricks notebook."""
        # Retrieve the notebook context
        ctx = dbutils.notebook.entry_point.getDbutils().notebook().getContext()
        # Convert the context...
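A minimal sketch of one way to branch on the environment, assuming the DATABRICKS_RUNTIME_VERSION environment variable (set on Databricks clusters, absent on a local machine) is an acceptable signal:

```python
import os
from pyspark.sql import SparkSession

def get_spark():
    """Return a Spark session appropriate for the current environment."""
    if "DATABRICKS_RUNTIME_VERSION" in os.environ:
        # On a Databricks cluster (e.g. inside a workflow job), the
        # classic builder returns the cluster's session.
        return SparkSession.builder.getOrCreate()
    # Locally, fall back to databricks-connect.
    from databricks.connect import DatabricksSession
    return DatabricksSession.builder.getOrCreate()
```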
- 791 Views
- 1 reply
- 0 kudos
Help Needed: Executor Lost Error in Multi-Node Distributed Training with PyTorch
Hi everyone, I'm currently working on distributed training of a PyTorch model, following the example provided here. The training runs perfectly on a single node with a single GPU. However, when I attempt multi-node training using the following configu...
- 0 kudos
We do not recommend using spot instances with distributed ML training workloads that use barrier mode, like TorchDistributor, as these workloads are extremely sensitive to executor loss. Please disable spot/preemption and try again.
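To illustrate the advice above, a minimal sketch of a job cluster spec with preemption disabled (the runtime and node values are placeholders; the aws_attributes block assumes an AWS workspace):

```python
# Placeholder cluster spec for a TorchDistributor job: every node is
# on-demand, so barrier-mode training cannot lose executors to spot
# preemption mid-run.
new_cluster = {
    "spark_version": "15.4.x-gpu-ml-scala2.12",  # placeholder GPU ML runtime
    "node_type_id": "g5.12xlarge",               # placeholder GPU node type
    "num_workers": 2,
    "aws_attributes": {
        "availability": "ON_DEMAND",  # no spot instances
        "first_on_demand": 100,       # keep all nodes on-demand
    },
}
```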
- 4474 Views
- 2 replies
- 0 kudos
cannot create external location: invalid Databricks Workspace configuration
Hi All, I am trying to create Databricks storage credentials, an external location, and a catalog with Terraform. Cloud: Azure. My storage credentials code is working correctly, but the external location code is throwing the below error when executing the Terraf...
- 0 kudos
Hi @manoj_2355ca, I am also facing the same error. Did you get a solution for it?
- 5088 Views
- 5 replies
- 0 kudos
typing extensions import match error
I am trying to install the stanza library and create a UDF to generate NER tags for the chunk_text column in my dataframe. Cluster config: DBR 14.3 LTS, Spark 3.5.0, Scala 2.12. Below is the code: def extract_entities(text): import stanza; nlp = stanza....
- 0 kudos
@SaadhikaB Hi, when I run dbutils.library.restartPython(), I get the following error
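For reference, a minimal sketch of the UDF being described, written as a pandas UDF so the stanza pipeline loads once per worker rather than once per row (the language, processors, and output format are assumptions):

```python
import pandas as pd
from pyspark.sql.functions import pandas_udf
from pyspark.sql.types import StringType

@pandas_udf(StringType())
def extract_entities(texts: pd.Series) -> pd.Series:
    import stanza
    # Assumes the English model was downloaded beforehand
    # (stanza.download("en")), e.g. via an init script.
    nlp = stanza.Pipeline(lang="en", processors="tokenize,ner", verbose=False)
    return texts.apply(
        lambda t: ", ".join(f"{e.text}:{e.type}" for e in nlp(t).entities)
    )

# Usage: df = df.withColumn("ner_tags", extract_entities("chunk_text"))
```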
- 3081 Views
- 0 replies
- 0 kudos
PYTEST: Module not found error
Hi, apologies, as I am trying to use pytest for the first time. I know this question has been raised before; I went through previous answers but the issue still exists. I am following Databricks and other articles using pytest. My structure is simple: tests--co...
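A frequent cause of this on Databricks is that the package under test is not on sys.path when pytest runs. A minimal sketch of a root conftest.py, assuming a hypothetical src/ folder next to tests/:

```python
# conftest.py at the repo root (hypothetical layout: src/ and tests/).
# Prepends the source folder so `import my_module` resolves no matter
# where pytest is launched from.
import os
import sys

sys.path.insert(0, os.path.join(os.path.dirname(__file__), "src"))
```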
- 8445 Views
- 4 replies
- 0 kudos
Resolved! What version of Python is used for the 16.1 runtime
I'm trying to create a Spark UDF for a registered model and getting: Exception: Python versions in the Spark Connect client and server are different. To execute user-defined functions, client and server should have the same minor Python version. Pleas...
- 0 kudos
Does this mean that:
1. A new Databricks runtime comes out
2. Serverless compute automatically switches to the new runtime + new Python version
3. Any external environments that use serverless, i.e. local VS Code / CI/CD environments, also need to upgrade their pyt...
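One way for an external environment to fail fast on a mismatch (a minimal sketch; Python 3.12 for DBR 16.1 is an assumption to verify against the runtime release notes):

```python
import sys

# Assumption: the target cluster runs DBR 16.1, which ships Python 3.12.
EXPECTED = (3, 12)

if sys.version_info[:2] != EXPECTED:
    raise RuntimeError(
        f"Local Python {sys.version_info[:2]} != cluster Python {EXPECTED}; "
        "Spark Connect UDFs require the same minor version on both sides."
    )
```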
- 625 Views
- 1 reply
- 0 kudos
Lakehouse monitoring metrics tables not created automatically.
Hello, I have an external table created in a Databricks Unity Catalog workspace and am trying to "Create a monitor" for it from the Quality tab. The dashboard gets created; however, the two metrics tables "profile" & "drift" a...
- 0 kudos
Hello @nikhil_2212! It looks like this post duplicates the one you recently posted. A response has already been provided to the Original post. I recommend continuing the discussion in that thread to keep the conversation focused and organised.
- 626 Views
- 1 reply
- 0 kudos
Stream processing large number of JSON files and handling exception
The application writes several small JSON files, and the expected volumes are high (estimate: 1 million during the peak season in an hourly window). As per the current design, these files are streamed through Spark Streaming and we use autolo...
- 0 kudos
We have customers that read millions of files per hour+ using Databricks Auto Loader. For high-volume use cases, we recommend enabling file notification mode, which, instead of continuously performing list operations on the filesystem, uses cloud nat...
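A minimal sketch of what that looks like (paths and table names are placeholders; cloudFiles.useNotifications is the switch for file notification mode):

```python
# Auto Loader in file notification mode: ingests new files from cloud
# notification events instead of repeatedly listing the input directory.
(spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.useNotifications", "true")  # file notification mode
    .option("cloudFiles.schemaLocation", "/tmp/checkpoints/schema")  # placeholder
    .load("/mnt/landing/json/")                                      # placeholder
    .writeStream
    .option("checkpointLocation", "/tmp/checkpoints/stream")         # placeholder
    .toTable("main.default.events"))                                 # placeholder
```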
- 943 Views
- 1 reply
- 0 kudos
Urgent: Need Authentication Reset for Databricks Workspace Access
I am unable to access my Databricks workspace because it is still redirecting to Microsoft Entra ID (Azure AD) authentication, even after I have removed the Azure AD enterprise application and changed the AWS IAM Identity Center settings. Issue Detail...
- 0 kudos
Hello @Pooviond! Please submit a ticket with the Databricks Support team for assistance in resolving this issue.
- 3720 Views
- 4 replies
- 1 kudos
Resolved! How best to measure the time-spent-waiting-for-an-instance?
I'm exploring using an instance pool. Can someone clarify which job event log tells me the time spent waiting for an instance? I've found 2 candidates: 1. The delta between "waitingForCluster" and "started" on the "run events" log, accessible v...
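A third data point worth comparing (a minimal sketch with the databricks-sdk; the run id is a placeholder): the Jobs API reports a per-run setup_duration, which covers cluster acquisition and setup.

```python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
run = w.jobs.get_run(run_id=123456789)  # placeholder run id
# setup_duration is in milliseconds and includes the time spent
# waiting for, and setting up, the cluster.
print("setup_duration_ms:", run.setup_duration)
```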
- 1230 Views
- 2 replies
- 1 kudos
Resolved! When is it time to change from ETL in notebooks to whl/py?
Hi! I would like some input/tips from the community regarding when it is time to go from a working solution in notebooks to something more "stable", like whl/py files. What are the pros/cons of notebooks compared to whl/py? The way I structured things...
- 1 kudos
Hey @Forssen, my advice: using .py files and .whl packages is generally more secure and scalable, especially when working in a team. One of the key advantages is that code reviews and version control are much more efficient with .py files, as changes ...
- 8450 Views
- 7 replies
- 2 kudos
Resolved! Move multiple notebooks at the same time (programmatically)
If I want to move multiple (hundreds of) notebooks at the same time from one folder to another, what is the best way to do that, other than going to each individual notebook and clicking "Move"? Is there a way to programmatically move notebooks? Like ...
- 2 kudos
You can use the export and import API calls to export the notebooks to your local machine and then import them to the new workspace.
Export: https://docs.databricks.com/api/workspace/workspace/export
Import: https://docs.databricks.com/api/works...
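A minimal sketch of that loop with the databricks-sdk (folder paths are placeholders; it assumes the folder contains only notebooks, and it copies rather than moves, so delete the originals once verified):

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.workspace import ExportFormat, ImportFormat

w = WorkspaceClient()
SRC = "/Users/me@example.com/old"  # placeholder source folder
DST = "/Users/me@example.com/new"  # placeholder destination folder

for obj in w.workspace.list(SRC):
    # Export returns base64-encoded source, which import_ accepts as-is.
    content = w.workspace.export(obj.path, format=ExportFormat.SOURCE).content
    w.workspace.import_(
        path=obj.path.replace(SRC, DST, 1),
        content=content,
        format=ImportFormat.SOURCE,
        language=obj.language,  # required for SOURCE imports
        overwrite=True,
    )
```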
- 2838 Views
- 1 reply
- 0 kudos
Resolved! Deduplication with rocksdb, should old state files be deleted manually (to manage storage size)?
Hi, I have the following streaming setup. I want to remove duplicates in streaming.
1) The deduplication strategy is defined by two fields: extraction_timestamp and hash (a row-wise hash).
2) Watermark strategy: extraction_timestamp with a "10 seconds" interval --> R...
- 0 kudos
Found the solution: https://kb.databricks.com/streaming/how-to-efficiently-manage-state-store-files-in-apache-spark-streaming-applications <-- these two parameters.
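For context, the deduplication in question looks roughly like this (a minimal sketch; stream_df is a placeholder streaming DataFrame):

```python
# Drop duplicates keyed on (extraction_timestamp, hash); the watermark
# bounds how long RocksDB must retain state for late arrivals.
deduped = (
    stream_df
    .withWatermark("extraction_timestamp", "10 seconds")
    .dropDuplicates(["extraction_timestamp", "hash"])
)
```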
- 1432 Views
- 6 replies
- 2 kudos
Disable exiting current cell when moving around with keyboard arrows
Is there any way to disable exiting the current cell when I move the cursor around with the arrow keys? When I press the up or down arrow, it exits the current cell and goes to another cell. Can that functionality be disabled so that when I hold the up or down arrow key, c...
- 2 kudos
Is there any place where I can submit this as a feature request?
- 6463 Views
- 4 replies
- 0 kudos
Adding NFS storage as external volume (Unity)
Can anyone share experience (or point me to a reference) that describes how to configure Azure Blob Storage with NFS enabled as an external volume in Databricks? I've succeeded in adding SMB storage to Databricks but (if I understand prope...
- 0 kudos
Hi @phguk, could you share how you managed to create an external volume referencing an Azure file share? Are you using Unity Catalog for this? It was my understanding that this is not possible.