- 1455 Views
- 1 replies
- 0 kudos
Text alignment in databricks dashboard markdown
Hi All, How can I align the text inside the Dashboard markdown to the middle? Is there an option to do this? Thanks, Gal
Hello @Gal_Sb! Databricks markdown does not support text alignment, and HTML/CSS do not work for this purpose in Databricks dashboards. You can try formatting options like headers or spacing adjustments. I'll also check with the team to explore possi...
- 1568 Views
- 3 replies
- 1 kudos
Resolved! DLT Pipeline Validate will always spawn new cluster
Hi all! I've started learning DLT pipelines, but I am struggling with the development of a pipeline. As far as I understand it, once I click “Validate” a cluster will spin up and stay up (by default for 2 hours) if the pipeline is in “Development” mode....
Well, turns out if I do not make any changes to the cluster settings when creating a new pipeline (i.e. keep the defaults), it works as expected (every new "validate" skips the "waiting for resources" step). Initially, I reduced the number of workers to a m...
- 1422 Views
- 4 replies
- 0 kudos
DLT refresh time for a combination of streaming and non-streaming tables?
```python
import dlt

@dlt.table
def joined_table():
    dim_df = spark.read.table("dim_table")  # Reloads every batch
    fact_df = spark.readStream.table("fact_stream")
    return fact_df.join(dim_df, "id", "left")
```
Hi, the current approach reloads dim_df on every batch, which can be inefficient. To optimize, consider broadcasting dim_df if it's small, or using a mapGroupsWithState function for stateful joins. Also, ensure that fact_df has sufficient watermarking to h...
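The broadcast suggestion can be sketched as follows. This is a minimal sketch, not a tested implementation: it reuses the table names from the question, assumes a DLT pipeline runtime (which provides `spark` and the `dlt` module), and only helps when dim_table comfortably fits in executor memory.

```python
import dlt
from pyspark.sql.functions import broadcast

@dlt.table
def joined_table():
    # The static dimension table is still re-read each microbatch, but the
    # broadcast hint ships it to executors instead of shuffling the stream side.
    dim_df = spark.read.table("dim_table")
    fact_df = spark.readStream.table("fact_stream")
    return fact_df.join(broadcast(dim_df), "id", "left")
```

This fragment only runs inside a DLT pipeline, so it is shown as a sketch rather than a standalone script.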
- 9949 Views
- 2 replies
- 0 kudos
How to detect if running in a workflow job?
Hi there, what's the best way to differentiate in what environment my Spark session is running? Locally I develop with databricks-connect's DatabricksSession, but that doesn't work when running a workflow job, which requires SparkSession.getOrCreate()....
```python
import json

def get_job_context():
    """Retrieve job-related context from the current Databricks notebook."""
    # Retrieve the notebook context
    ctx = dbutils.notebook.entry_point.getDbutils().notebook().getContext()
    # Convert the context to a Python dict
    return json.loads(ctx.toJson())
```
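Once the context is available as a dict, a small helper can distinguish a job run from an interactive or local session. This is a sketch under the assumption that job runs expose a "jobId" entry under the context's "tags"; the helper name and the sample payloads below are mine, not an official API.

```python
def is_job_run(context: dict) -> bool:
    """Return True when the notebook context carries a non-empty job ID tag.

    Assumes the dict shape produced by getContext().toJson(), where job runs
    carry "jobId" under "tags" (an assumption to verify in your workspace).
    """
    tags = context.get("tags", {})
    return "jobId" in tags and bool(tags["jobId"])

# Hypothetical context payloads for illustration only:
job_ctx = {"tags": {"jobId": "123", "runId": "456"}}
interactive_ctx = {"tags": {"browserHostName": "adb-example.azuredatabricks.net"}}
```

Locally (under databricks-connect) the context lookup itself will fail, so wrapping the `dbutils` call in a try/except is another practical signal for "not running in a workspace".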
- 1239 Views
- 1 replies
- 0 kudos
Help Needed: Executor Lost Error in Multi-Node Distributed Training with PyTorch
Hi everyone, I'm currently working on distributed training of a PyTorch model, following the example provided here. The training runs perfectly on a single node with a single GPU. However, when I attempt multi-node training using the following configu...
We do not recommend using spot instances with distributed ML training workloads that use barrier mode, like TorchDistributor, as these workloads are extremely sensitive to executor loss. Please disable spot/pre-emption and try again.
- 4874 Views
- 2 replies
- 0 kudos
cannot create external location: invalid Databricks Workspace configuration
Hi All, I am trying to create Databricks storage credentials, an external location, and a catalog with Terraform. Cloud: Azure. My storage credentials code is working correctly, but the external location code is throwing the below error when executing the Terraf...
Hi @manoj_2355ca , I am also facing the same error, did you get the solution for it?
- 5669 Views
- 5 replies
- 0 kudos
typing extensions import match error
I am trying to install the stanza library and create a UDF that generates NER tags for the chunk_text column in my DataFrame.
Cluster config: DBR 14.3 LTS, Spark 3.5.0, Scala 2.12.
Below is the code:

```python
def extract_entities(text):
    import stanza
    nlp = stanza....
```
@SaadhikaB Hi, when I run dbutils.library.restartPython(), I get the following error
- 9736 Views
- 4 replies
- 0 kudos
Resolved! What version of Python is used for the 16.1 runtime
I'm trying to create a Spark UDF for a registered model and getting:
Exception: Python versions in the Spark Connect client and server are different. To execute user-defined functions, client and server should have the same minor Python version. Pleas...
Does this mean that:
1. A new dbx runtime comes out
2. Serverless compute automatically switches to the new runtime + new python version
3. Any external environments that use serverless (i.e., local VSCode / CICD environments) also need to upgrade their pyt...
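One way to catch the mismatch early in a local or CI environment is to compare the running interpreter against the minor version the target runtime ships. A minimal sketch; the 3.12 pairing for DBR 16.1 used below is an assumption to confirm against the runtime release notes.

```python
import sys

def check_python_matches(expected_minor: tuple) -> bool:
    """Compare this interpreter's (major, minor) against the version the
    remote runtime is documented to use."""
    return sys.version_info[:2] == expected_minor

# Assumed mapping: DBR 16.1 ships Python 3.12 (verify in the release notes).
EXPECTED = (3, 12)
if not check_python_matches(EXPECTED):
    print(f"Local Python {sys.version_info[:2]} != server {EXPECTED}; "
          "UDF execution over Spark Connect will fail.")
```

Running this as a pre-flight step in CI makes the "different minor Python version" exception show up before a deploy rather than at UDF execution time.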
- 975 Views
- 1 replies
- 0 kudos
Lakehouse monitoring metrics tables not created automatically.
Hello, I have an external table created in a Databricks Unity Catalog workspace and am trying to "Create a monitor" for it from the Quality tab. The dashboard is getting created; however, the two metrics tables, "profile" & "drift", a...
Hello @nikhil_2212! It looks like this post duplicates the one you recently posted. A response has already been provided to the Original post. I recommend continuing the discussion in that thread to keep the conversation focused and organised.
- 778 Views
- 1 replies
- 0 kudos
Stream processing large number of JSON files and handling exception
The application writes many small JSON files, and the expected volumes are high (estimate: 1 million during the peak season in an hourly window). As per the current design, these files are streamed through Spark Streaming and we use Autolo...
We have customers that read millions of files per hour+ using Databricks Auto Loader. For high-volume use cases, we recommend enabling file notification mode, which, instead of continuously performing list operations on the filesystem, uses cloud nat...
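In code, file notification mode is a single Auto Loader option. A minimal sketch, assuming a runtime-provided `spark` session; the format, paths, and schema location below are placeholders, not values from the thread.

```python
# Sketch: Auto Loader with file notification mode instead of directory listing.
df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "json")
      # Consume cloud-native file events rather than repeatedly listing the path.
      .option("cloudFiles.useNotifications", "true")
      .option("cloudFiles.schemaLocation", "/Volumes/main/default/schemas/ingest")
      .load("/Volumes/main/default/landing/json"))
```

Notification mode needs cloud-side permissions to create the event infrastructure (e.g. queues/subscriptions), so check the Auto Loader documentation for your cloud before enabling it.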
- 1132 Views
- 1 replies
- 0 kudos
Urgent: Need Authentication Reset for Databricks Workspace Access
I am unable to access my Databricks workspace because it is still redirecting to Microsoft Entra ID (Azure AD) authentication, even after I have removed the Azure AD enterprise application and changed the AWS IAM Identity Center settings. Issue Detail...
Hello @Pooviond! Please submit a ticket with the Databricks Support team for assistance in resolving this issue.
- 4539 Views
- 4 replies
- 1 kudos
Resolved! How best to measure the time-spent-waiting-for-an-instance?
I'm exploring using an instance pool. Can someone clarify for me which job event log tells me the time-spent-waiting-for-an-instance? I've found 2 candidates:
1. The delta between "waitingForCluster" and "started" on the "run events" log, accessible v...
- 1632 Views
- 2 replies
- 1 kudos
Resolved! When is it time to change from ETL in notebooks to whl/py?
Hi! I would like some input/tips from the community regarding when it is time to go from a working solution in notebooks to something more "stable", like whl/py files. What are the pros/cons of notebooks compared to whl/py? The way I structured things...
Hey @Forssen,
My advice: using .py files and .whl packages is generally more secure and scalable, especially when working in a team. One of the key advantages is that code reviews and version control are much more efficient with .py files, as changes ...
- 9210 Views
- 7 replies
- 2 kudos
Resolved! Move multiple notebooks at the same time (programmatically)
If I want to move multiple (hundreds of) notebooks at the same time from one folder to another, what is the best way to do that, other than going to each individual notebook and clicking "Move"? Is there a way to programmatically move notebooks? Like ...
You can use the export and import API calls in order to export the notebooks to your local machine and then import them to the new workspace.
Export: https://docs.databricks.com/api/workspace/workspace/export
Import: https://docs.databricks.com/api/works...
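A loop over those two endpoints can be sketched with the Databricks SDK for Python. This is a sketch, not a tested migration script: it assumes `WorkspaceClient` is configured via environment variables, and the path-mapping helper and folder names are mine.

```python
def target_path(src: str, src_root: str, dst_root: str) -> str:
    """Map a notebook path from the source folder into the destination folder."""
    if not src.startswith(src_root):
        raise ValueError(f"{src} is not under {src_root}")
    return dst_root.rstrip("/") + "/" + src[len(src_root):].lstrip("/")

def move_notebooks(src_root: str, dst_root: str) -> None:
    # Lazy import so the path helper above stays usable without the SDK installed.
    from databricks.sdk import WorkspaceClient
    from databricks.sdk.service.workspace import ExportFormat, ImportFormat

    # Assumes DATABRICKS_HOST / DATABRICKS_TOKEN are set in the environment.
    w = WorkspaceClient()
    for obj in w.workspace.list(src_root, recursive=True):
        if obj.object_type and obj.object_type.name == "NOTEBOOK":
            exported = w.workspace.export(obj.path, format=ExportFormat.SOURCE)
            w.workspace.import_(target_path(obj.path, src_root, dst_root),
                                content=exported.content,
                                format=ImportFormat.SOURCE,
                                overwrite=True)
            w.workspace.delete(obj.path)  # remove the original to complete the "move"
```

Deleting only after a successful import makes the copy step idempotent; re-running after a partial failure just re-copies the remaining notebooks.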
- 3559 Views
- 1 replies
- 0 kudos
Resolved! Deduplication with rocksdb, should old state files be deleted manually (to manage storage size)?
Hi, I have the following streaming setup: I want to remove duplicates in streaming.
1) Deduplication strategy is defined by two fields: extraction_timestamp and hash (row-wise hash)
2) Watermark strategy: extraction_timestamp with a "10 seconds" interval
--> R...
Found solution. https://kb.databricks.com/streaming/how-to-efficiently-manage-state-store-files-in-apache-spark-streaming-applications <-- these two parameters.