- 8584 Views
- 3 replies
- 3 kudos
Resolved! DLT Job Clusters: Continuous vs Triggered Cluster Start Times
Hi there, I'm curious if anyone is able to definitively help me answer how DLT job clusters operate/run. For example, the following is my baseline understanding of DLT job clusters: if I run a triggered DLT pipeline (e.g. daily), the job cluster takes m...
Ideally one would expect clusters used for a DLT pipeline to terminate after the pipeline execution has finished. However, while running in the `development` environment you'll notice the cluster doesn't terminate on its own, whereas in `production` it terminates ...
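For readers who want to confirm which mode a pipeline is in, below is a minimal sketch using the Pipelines REST API; the workspace URL, token, and pipeline ID are placeholders, and the `development` flag is what controls the keep-warm behavior described above:

```python
import requests

# Placeholders you must supply; none of these values come from the thread.
HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<personal-access-token>"
PIPELINE_ID = "<pipeline-id>"

# Fetch the pipeline spec. In development mode the cluster is kept warm for
# faster iteration; in production mode it shuts down after the update finishes.
resp = requests.get(
    f"{HOST}/api/2.0/pipelines/{PIPELINE_ID}",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
print("development mode:", resp.json().get("spec", {}).get("development", False))
```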
- 2318 Views
- 1 replies
- 0 kudos
Can I update a table comment using REST API?
https://docs.databricks.com/api/workspace/tables It seems I can only list/delete tables; is there a way to update a table's metadata, like the comment or detail fields, via the REST API?
Hi @al2co33, we don't currently provide an API for updating table comments; however, you can use the SQL Statement Execution API to do it. You can follow this tutorial to ALTER a TABLE/COLUMN COMMENT: https://learn.microsoft.com/en-us/azure...
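As a concrete illustration of that suggestion, here is a minimal sketch against the Statement Execution API; the workspace URL, token, warehouse ID, and table name are all placeholders:

```python
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"  # placeholder
TOKEN = "<personal-access-token>"                       # placeholder
WAREHOUSE_ID = "<sql-warehouse-id>"                     # placeholder

# Run a COMMENT ON TABLE statement through the Statement Execution API.
resp = requests.post(
    f"{HOST}/api/2.0/sql/statements",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "warehouse_id": WAREHOUSE_ID,
        "statement": "COMMENT ON TABLE main.default.my_table "
                     "IS 'Refreshed nightly by the ETL job'",
    },
)
resp.raise_for_status()
print(resp.json()["status"]["state"])
```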
- 3507 Views
- 1 replies
- 0 kudos
Databricks Java - Create Jar in Java 11
I am trying to run a simple print Java program, which is not working; I'm getting compilation version issues even though I changed the environment variable to point to Java 11. Can you please help me? Can we create Java with a Spark session and execute it as a ja...
@Databricks_Java You can run a command like this: `spark-submit --class com.test.Main example.jar` and make sure to check the Java version and match it with the DBR compatibility.
- 1948 Views
- 1 replies
- 0 kudos
Disable personal compute with the Databricks API or UI
For a production environment, I want to disable the Personal Compute policy, because I do not want all users to be able to create personal compute clusters in production. Unfortunately, I am not able to access the account console, so I want to revoke perm...
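One workspace-level workaround, sketched below rather than an official recipe (whether the built-in policy's ACL can be edited this way may depend on account-level settings), is to restrict CAN_USE on the Personal Compute policy via the Permissions API; host and token are placeholders:

```python
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"      # placeholder
HEADERS = {"Authorization": "Bearer <personal-access-token>"}  # placeholder

# Find the built-in Personal Compute policy by name.
policies = requests.get(
    f"{HOST}/api/2.0/policies/clusters/list", headers=HEADERS
).json()
personal = next(p for p in policies["policies"] if p["name"] == "Personal Compute")

# Overwrite its ACL so only the admins group keeps CAN_USE; other users can
# then no longer create clusters from this policy in the workspace.
resp = requests.put(
    f"{HOST}/api/2.0/permissions/cluster-policies/{personal['policy_id']}",
    headers=HEADERS,
    json={"access_control_list": [
        {"group_name": "admins", "permission_level": "CAN_USE"}
    ]},
)
resp.raise_for_status()
```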
- 4402 Views
- 1 replies
- 0 kudos
No space left on device and IllegalStateException: Have already allocated a maximum of 8192 pages
Hello, I'm writing to bring to your attention an issue that we have encountered while working with Databricks and to seek your assistance in resolving it. Context of the error: when a SQL query (1700 lines) is run, the corresponding Databricks job is faili...
Are you processing Parquet files, or what is the format of your tables? Can you split your SQL query instead of having one huge query with 1700 lines?
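To illustrate the splitting suggestion, a minimal sketch that materializes intermediate stages instead of running one giant statement; the table and column names are invented, and `spark` is the notebook session:

```python
# Stage 1: filter the raw data and persist it, so later stages read a
# smaller, already-computed input instead of re-deriving it in one plan.
spark.sql("""
    CREATE OR REPLACE TABLE tmp_stage1 AS
    SELECT user_id, event_date, amount
    FROM raw_events
    WHERE event_date >= '2024-01-01'
""")

# Stage 2: aggregate from the materialized stage rather than the raw table.
spark.sql("""
    CREATE OR REPLACE TABLE tmp_stage2 AS
    SELECT user_id, SUM(amount) AS total_amount
    FROM tmp_stage1
    GROUP BY user_id
""")

result = spark.sql("SELECT * FROM tmp_stage2 ORDER BY total_amount DESC LIMIT 100")
result.show()
```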
- 2208 Views
- 3 replies
- 0 kudos
Autoloader file latency
Hi Team, I would like to understand whether there is a metadata table for Auto Loader in Databricks that captures information about file arrival and processing. The reason we are experiencing data issues is that our table A receives hundreds of files ...
Check the cloud_files_state() API. You can find examples here: https://docs.databricks.com/en/ingestion/auto-loader/production.html#querying-files-discovered-by-auto-loader
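For example, a minimal sketch of querying that discovery state; the checkpoint path is a placeholder for the stream's actual checkpoint location:

```python
# Inspect the files Auto Loader has discovered for one stream; the argument
# is that stream's checkpoint location (placeholder path below).
files = spark.sql(
    "SELECT path, discovery_time, commit_time "
    "FROM cloud_files_state('/path/to/stream/checkpoint')"
)
# Rows with a NULL commit_time were discovered but not yet processed,
# which is exactly the arrival-vs-processing latency asked about above.
files.show(truncate=False)
```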
- 2435 Views
- 2 replies
- 2 kudos
Resolved! Regarding cloning my gitrepo under workspace/Users/user_name
Hi all, I recently started using Databricks. I want to clone my Git repo under the workspace/Users/user_name path, which I am not able to do; by default I can only clone under the Repos directory. Can anyone please advise me on this? Thank you
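For what it's worth, the Repos API lets you pick the clone path explicitly. A minimal sketch follows; the repo URL, credentials, and paths are placeholders, and note that classic repos must live under /Repos, while workspaces with the newer Git folders feature also accept paths under /Workspace/Users:

```python
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"      # placeholder
HEADERS = {"Authorization": "Bearer <personal-access-token>"}  # placeholder

# Clone a repo to an explicit path instead of the default Repos location.
resp = requests.post(
    f"{HOST}/api/2.0/repos",
    headers=HEADERS,
    json={
        "url": "https://github.com/example-org/example-repo.git",  # placeholder
        "provider": "gitHub",
        "path": "/Repos/user_name@example.com/example-repo",
    },
)
resp.raise_for_status()
print(resp.json())
```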
- 4004 Views
- 4 replies
- 0 kudos
Connect my spark code running in AWS ECS to databricks cluster
Hi team, I wanted to know if there is a way to connect a piece of my PySpark code running in ECS to a Databricks cluster and leverage the Databricks compute using Databricks Connect. I see Databricks Connect is for connecting local IDE code to Databrick...
Noted @Retired_mod @RonDeFreitas. I am currently using Databricks Runtime v12.2 (which is < v13.0). I followed this doc (Databricks Connect for Databricks Runtime 12.2 LTS and below) and connected my local terminal to the Databricks cluster and was able ...
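For anyone wiring this up in a container rather than on a laptop: legacy Databricks Connect (DBR 12.2 and below) can read its connection settings from environment variables, so the interactive `databricks-connect configure` step isn't needed in ECS. A minimal sketch with placeholder values:

```python
import os
from pyspark.sql import SparkSession

# Legacy Databricks Connect reads these instead of ~/.databricks-connect;
# all values below are placeholders.
os.environ["DATABRICKS_ADDRESS"] = "https://<your-workspace>.cloud.databricks.com"
os.environ["DATABRICKS_API_TOKEN"] = "<personal-access-token>"
os.environ["DATABRICKS_CLUSTER_ID"] = "<cluster-id>"
os.environ["DATABRICKS_ORG_ID"] = "0"      # non-zero only on Azure workspaces
os.environ["DATABRICKS_PORT"] = "15001"    # default Databricks Connect port

# The session created here executes on the remote Databricks cluster.
spark = SparkSession.builder.getOrCreate()
print(spark.range(5).count())
```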
- 5207 Views
- 2 replies
- 0 kudos
Resolved! spark context in databricks
Hi all, in Azure Databricks I am using Structured Streaming's foreachBatch functionality. In one of the functions I am creating a temp view from a PySpark DataFrame (not a global temp view) and trying to access the same temp view using the spark.sql functiona...
Do you face this issue without Spark streaming as well? Also, could you share minimal repro code, preferably without streaming?
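A common cause of this symptom is that the temp view gets registered on a different SparkSession than the one used to query it. A minimal sketch of the usual fix, registering and querying through the micro-batch DataFrame's own session (`df.sparkSession`, available in PySpark 3.3+) rather than the notebook-level `spark` object; the source and checkpoint path are placeholders:

```python
def process_batch(df, batch_id):
    # The view is registered on the session the micro-batch DataFrame
    # belongs to...
    df.createOrReplaceTempView("batch_view")
    # ...so query it through that same session, not the global `spark`.
    df.sparkSession.sql("SELECT COUNT(*) AS n FROM batch_view").show()

# Illustrative wiring with a toy rate source; replace with your own stream.
query = (
    spark.readStream.format("rate").load()
    .writeStream
    .foreachBatch(process_batch)
    .option("checkpointLocation", "/tmp/checkpoints/demo")  # placeholder
    .start()
)
```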
- 886 Views
- 0 replies
- 0 kudos
Databricks & Bigquery
Databricks is packaging an old version of the big-query jar (Databricks also repackaged it and created a fat jar), and our application needs the latest jar. Now the latest jar depends on the spark-bigquery-connector.properties file for a property, scala.binary.vers...
- 2020 Views
- 1 replies
- 0 kudos
Unity catalog internal error - quality monitoring
I am trying to get my head around the quality monitoring functionality in Unity Catalog. I configured one of the tables in our Unity Catalog. My assumption is that the profile and drift metrics tables are created automatically. But when I get an internal e...
Hi, were you able to resolve this? I am having a similar issue. Thanks
- 2160 Views
- 1 replies
- 0 kudos
Generate Excel for a SQL query
Greetings, I am using a Java Spring Boot application that is supposed to respond with an Excel file based on the request. My current approach involves reading data using JDBC drivers, storing it in appropriate data structures, and writing it to an Excel file, which ...
Thanks for putting this together @Retired_mod. I see that this approach will help to generate an Excel file after receiving the data from Databricks in the form of a ResultSet, which has to be parsed. I believe this approach is the appropriate way to generat...
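As a side note, if Java isn't a hard requirement, here is a minimal Python sketch of the same flow using the Databricks SQL connector plus pandas; the hostname, HTTP path, token, and table are placeholders, and `to_excel` needs openpyxl installed:

```python
import pandas as pd
from databricks import sql

# Pull rows over the SQL warehouse endpoint (all connection values are
# placeholders), then write them straight to an Excel file.
with sql.connect(
    server_hostname="<your-workspace>.cloud.databricks.com",
    http_path="/sql/1.0/warehouses/<warehouse-id>",
    access_token="<personal-access-token>",
) as conn:
    with conn.cursor() as cursor:
        cursor.execute("SELECT * FROM main.default.my_table LIMIT 1000")
        rows = cursor.fetchall()
        columns = [c[0] for c in cursor.description]

pd.DataFrame(rows, columns=columns).to_excel("report.xlsx", index=False)
```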
- 4886 Views
- 0 replies
- 0 kudos
Error authenticating databricks.sdk.WorkspaceClient with external workspace via Azure Native Auth
I am referencing this doc to initialize a databricks.sdk.WorkspaceClient instance via Azure native authentication. I am initializing this WorkspaceClient within a Databricks notebook, but I am trying to use the client to access the Jobs API of...
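For reference, a minimal sketch of Azure service-principal (client-secret) authentication with the Python SDK, pointed at the external workspace; every host, ID, and secret below is a placeholder:

```python
from databricks.sdk import WorkspaceClient

# Setting the azure_* arguments selects the Azure client-secret flow; the
# host is the *other* workspace whose Jobs API you want to call.
w = WorkspaceClient(
    host="https://adb-<workspace-id>.<n>.azuredatabricks.net",  # placeholder
    azure_client_id="<application-id>",        # placeholder
    azure_client_secret="<client-secret>",     # placeholder
    azure_tenant_id="<tenant-id>",             # placeholder
)

# Sanity check: list jobs in the external workspace.
for job in w.jobs.list():
    print(job.job_id, job.settings.name)
```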
- 2077 Views
- 1 replies
- 0 kudos
Data masking best practices
Hi Team, could you please suggest any best practices/blogs on implementing data masking, row-level and column-level access control, role-based access control (RBAC), and attribute-based access control (ABAC)? Regards, Phanindra
Hi, Can you check if this document answers your question: https://www.databricks.com/blog/2020/11/20/enforcing-column-level-encryption-and-avoiding-data-duplication-with-pii.html
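As a starting point on the masking part specifically, a minimal Unity Catalog column-mask sketch; the catalog, schema, table, and group names are illustrative:

```python
# 1) A SQL UDF decides per-caller what to return: members of the
#    (illustrative) pii_readers group see the real value, others see a
#    redacted placeholder.
spark.sql("""
    CREATE OR REPLACE FUNCTION main.default.mask_email(email STRING)
    RETURNS STRING
    RETURN CASE
        WHEN is_account_group_member('pii_readers') THEN email
        ELSE '***REDACTED***'
    END
""")

# 2) Attach the function as a column mask; queries against the column are
#    rewritten through it automatically. Row filters work analogously via
#    ALTER TABLE ... SET ROW FILTER.
spark.sql("""
    ALTER TABLE main.default.customers
    ALTER COLUMN email SET MASK main.default.mask_email
""")
```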
- 7063 Views
- 0 replies
- 3 kudos
Unity Catalog Governance Value Levers
What makes Unity Catalog a game-changer? The blog intricately dissects five main value levers: mitigating data and architectural risks, ensuring compliance, accelerating innovation, reducing platform complexity and costs while improving operational e...
Labels: .CSV (1), Access Data (2), Access Databricks (1), Access Delta Tables (2), Account reset (1), ADF Pipeline (1), ADLS Gen2 With ABFSS (1), Advanced Data Engineering (1), AI (1), Analytics (1), Apache spark (1), Apache Spark 3.0 (1), API Documentation (3), Architecture (1), asset bundle (1), Asset Bundles (3), Auto-loader (1), Autoloader (4), AWS security token (1), AWSDatabricksCluster (1), Azure (5), Azure data disk (1), Azure databricks (14), Azure Databricks SQL (6), Azure databricks workspace (1), Azure Unity Catalog (5), Azure-databricks (1), AzureDatabricks (1), AzureDevopsRepo (1), Big Data Solutions (1), Billing (1), Billing and Cost Management (1), Blackduck (1), Bronze Layer (1), Certification (3), Certification Exam (1), Certification Voucher (3), CICDForDatabricksWorkflows (1), Cloud_files_state (1), CloudFiles (1), Cluster (3), Cluster Init Script (1), Community Edition (3), Community Event (1), Community Group (2), Community Members (1), Compute (3), Compute Instances (1), conditional tasks (1), Connection (1), Contest (1), Credentials (1), Custom Python (1), CustomLibrary (1), Data (1), Data + AI Summit (1), Data Engineering (3), Data Explorer (1), Data Ingestion & connectivity (1), Databrick add-on for Splunk (1), databricks (2), Databricks Academy (1), Databricks AI + Data Summit (1), Databricks Alerts (1), Databricks Assistant (1), Databricks Certification (1), Databricks Cluster (2), Databricks Clusters (1), Databricks Community (10), Databricks community edition (3), Databricks Community Edition Account (1), Databricks Community Rewards Store (3), Databricks connect (1), Databricks Dashboard (3), Databricks delta (2), Databricks Delta Table (2), Databricks Demo Center (1), Databricks Documentation (3), Databricks genAI associate (1), Databricks JDBC Driver (1), Databricks Job (1), Databricks Lakehouse Platform (6), Databricks Migration (1), Databricks Model (1), Databricks notebook (2), Databricks Notebooks (3), Databricks Platform (2), Databricks Pyspark (1), Databricks Python Notebook (1), Databricks Repo (1), Databricks Runtime (1), Databricks SQL (5), Databricks SQL Alerts (1), Databricks SQL Warehouse (1), Databricks Terraform (1), Databricks UI (1), Databricks Unity Catalog (4), Databricks Workflow (2), Databricks Workflows (2), Databricks workspace (3), Databricks-connect (1), databricks_cluster_policy (1), DatabricksJobCluster (1), DataCleanroom (1), DataDays (1), Datagrip (1), DataMasking (2), DataVersioning (1), dbdemos (2), DBFS (1), DBRuntime (1), DBSQL (1), DDL (1), Dear Community (1), deduplication (1), Delt Lake (1), Delta Live Pipeline (3), Delta Live Table (5), Delta Live Table Pipeline (5), Delta Live Table Pipelines (4), Delta Live Tables (7), Delta Sharing (2), deltaSharing (1), Deny assignment (1), Development (1), Devops (1), DLT (10), DLT Pipeline (7), DLT Pipelines (5), Dolly (1), Download files (1), Dynamic Variables (1), Engineering With Databricks (1), env (1), ETL Pipelines (1), External Sources (1), External Storage (2), FAQ for Databricks Learning Festival (2), Feature Store (2), Filenotfoundexception (1), Free trial (1), GCP Databricks (1), GenAI (1), Getting started (2), Google Bigquery (1), HIPAA (1), import (1), Integration (1), JDBC Connections (1), JDBC Connector (1), Job Task (1), Lineage (1), LLM (1), Login (1), Login Account (1), Machine Learning (2), MachineLearning (1), Materialized Tables (2), Medallion Architecture (1), Migration (1), ML Model (2), MlFlow (2), Model Training (1), Module (1), Networking (1), Notebook (1), Onboarding Trainings (1), Pandas udf (1), Permissions (1), personalcompute (1), Pipeline (2), Plotly (1), PostgresSQL (1), Pricing (1), Pyspark (1), Python (5), Python Code (1), Python Wheel (1), Quickstart (1), Read data (1), Repos Support (1), Reset (1), Rewards Store (2), Schedule (1), Serverless (3), serving endpoint (1), Session (1), Sign Up Issues (2), Spark Connect (1), sparkui (2), Splunk (2), SQL (8), Summit23 (7), Support Tickets (1), Sydney (2), Table Download (1), Tags (1), terraform (1), Training (2), Troubleshooting (1), Unity Catalog (4), Unity Catalog Metastore (2), Update (1), user groups (1), Venicold (3), Voucher Not Recieved (1), Watermark (1), Weekly Documentation Update (1), Weekly Release Notes (2), Women (1), Workflow (2), Workspace (3)