- 1032 Views
- 1 reply
- 2 kudos
Accelerating discovery on Unity Catalog with a revamped Catalog Explorer
Discover favorite and recent UC assets in Quick Access. You'll experience simplified navigation with the gear icon (top left) for compute, storage, credentials, connections, DBFS, and management features. Delta Sharing, Clean Rooms, and External D...
Thank you for sharing this update on the Unity Catalog! @Ajay-Pandey Appreciate the detailed overview!
- 7805 Views
- 1 reply
- 5 kudos
Configuring DNS resolution for Private Databricks Workspaces (AWS)
Intro: For customers on the E2 Platform, Databricks has a feature that allows them to use AWS PrivateLink to provision secure private workspaces by creating VPC endpoints to both the front-end and back-end interfaces of the Databricks infrastructure. ...
- 4036 Views
- 0 replies
- 2 kudos
CICD for databricks workflow jobs
This post shows how to set up CI/CD for Databricks workflow jobs. Below are the two essential components needed for a complete CI/CD setup of workflow jobs: Databricks Asset Bundles (DABs) and an Azure DevOps pipeline. Databricks Asset Bundle (from a local terminal)...
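A minimal sketch of the bundle steps an Azure DevOps stage might run for such a setup, assuming the new Databricks CLI is on PATH and the repo already contains a databricks.yml; the target name `dev` and job key `my_job` are hypothetical:

```python
# Hypothetical CI step: shell out to the Databricks CLI bundle commands that an
# Azure DevOps stage would run. Assumes `databricks` is on PATH and the repo
# contains a databricks.yml; "dev" and "my_job" are made-up names.
import subprocess

def run(cmd: list[str]) -> None:
    """Run a CLI command and fail the pipeline step on a non-zero exit code."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["databricks", "bundle", "validate", "-t", "dev"])       # check the bundle config
run(["databricks", "bundle", "deploy", "-t", "dev"])         # push jobs/code to the workspace
run(["databricks", "bundle", "run", "my_job", "-t", "dev"])  # optionally trigger the workflow job
```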
- 3333 Views
- 1 reply
- 3 kudos
Resolved! RamK - Certification Update
Hi Team, my name is Ram, based out of Singapore. I am new to this community. Recently I completed my Databricks certifications, starting from Data Analyst, then Data Engineering and Gen AI. Looking forward to getting connected in serving the Data and AI...
- 2395 Views
- 0 replies
- 1 kudos
Free Databricks Professional Data Engineer Practice Tests
Hi All, I came across a very good set of practice tests for the Databricks Professional Data Engineer certification. For the time being it is being given away for free by the instructor as a promotional activity. Enroll if you are planning to go for the certification: http...
- 1734 Views
- 0 replies
- 1 kudos
How to deal with Slow Jobs?
Definitely configure job timeouts and notifications. This will help you identify slowness due to various factors. It is also crucial to investigate and fix the issue causing the slowness. The first step is to identify the problem. This ...
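A minimal sketch of setting a timeout and failure notifications on an existing job via the Jobs 2.1 REST API; the workspace host, token, job ID, and email address are placeholders:

```python
# Minimal sketch: set a timeout and failure notifications on an existing job
# via the Jobs 2.1 REST API. The host, token, job_id, and email are placeholders.
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"  # placeholder
TOKEN = "<personal-access-token>"                        # placeholder
JOB_ID = 123                                             # placeholder

resp = requests.post(
    f"{HOST}/api/2.1/jobs/update",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "job_id": JOB_ID,
        "new_settings": {
            # fail the run if it exceeds 2 hours instead of hanging indefinitely
            "timeout_seconds": 7200,
            # get notified so slow or failed runs are investigated promptly
            "email_notifications": {"on_failure": ["owner@example.com"]},
        },
    },
    timeout=30,
)
resp.raise_for_status()
```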
- 1653 Views
- 0 replies
- 0 kudos
Monitoring a Streaming Job
If you have a streaming job, you need to check the batch metrics to understand the stream's progress. However, here are some other suggestions you can use to monitor a streaming job and detect when it is stuck in a "hung" state. Streaming Listeners sp...
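A minimal sketch of one of those suggestions: a Python StreamingQueryListener (available from PySpark 3.4; onQueryIdle fires on 3.5+) that logs batch progress so a stalled stream becomes visible in the driver logs:

```python
# Minimal sketch of a StreamingQueryListener that logs progress so a "hung"
# stream (no new batches) becomes visible in the driver logs.
from pyspark.sql import SparkSession
from pyspark.sql.streaming import StreamingQueryListener

class ProgressLogger(StreamingQueryListener):
    def onQueryStarted(self, event):
        print(f"query started: {event.id}")

    def onQueryProgress(self, event):
        p = event.progress
        print(f"batch={p.batchId} inputRows={p.numInputRows} durationMs={p.durationMs}")

    def onQueryIdle(self, event):
        # emitted when no data is available; long gaps here suggest an upstream stall
        print(f"query idle: {event.id}")

    def onQueryTerminated(self, event):
        print(f"query terminated: {event.id} exception={event.exception}")

spark = SparkSession.builder.getOrCreate()
spark.streams.addListener(ProgressLogger())
```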
- 1775 Views
- 0 replies
- 0 kudos
Why configure a job timeout?
If you use Databricks Jobs for your workloads, you might have run into a situation where you find your jobs in a "hung" state. Before cancelling the job, it is important to collect a thread dump, as I described here, to be able to f...
- 2129 Views
- 1 reply
- 0 kudos
A handy tool called spark-column-analyser
I just wanted to share a tool I built called spark-column-analyzer. It's a Python package that helps you dig into your Spark DataFrames with ease. Ever spend ages figuring out what's going on in your columns? Like, how many null values are there, or h...
An example added to the README in GitHub. Doing analysis for column Postcode, JSON-formatted output: {"Postcode": {"exists": true, "num_rows": 93348, "data_type": "string", "null_count": 21921, "null_percentage": 23.48, "distinct_count": 38726, "distinct_percentage"...
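Not the package's own API, but a plain-PySpark sketch that produces the same kind of per-column profile shown above (null count/percentage and distinct count); the input path is hypothetical:

```python
# Minimal sketch of the same per-column profile in plain PySpark (not the
# package's own API): null count/percentage and distinct count for one column.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.parquet("/tmp/properties")  # hypothetical input path

col = "Postcode"  # column name taken from the reply's example
num_rows = df.count()
null_count = df.filter(F.col(col).isNull()).count()
distinct_count = df.select(col).distinct().count()

print({
    col: {
        "num_rows": num_rows,
        "data_type": dict(df.dtypes)[col],
        "null_count": null_count,
        "null_percentage": round(100.0 * null_count / num_rows, 2),
        "distinct_count": distinct_count,
        "distinct_percentage": round(100.0 * distinct_count / num_rows, 2),
    }
})
```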
- 1500 Views
- 0 replies
- 2 kudos
Schema evolution clause added to SQL merge syntax
You can now add the WITH SCHEMA EVOLUTION clause to a SQL merge statement to enable schema evolution for the operation. For more information: https://docs.databricks.com/en/delta/update-schema.html#sql-evo #Databricks
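A minimal sketch of the clause in use, run through spark.sql so it fits a notebook cell; the table names target_tbl and source_updates are hypothetical, and the exact syntax should be checked against the linked docs:

```python
# Minimal sketch: MERGE with the WITH SCHEMA EVOLUTION clause, so new columns
# arriving in the source are added to the target automatically.
# Table names (target_tbl, source_updates) are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark.sql("""
    MERGE WITH SCHEMA EVOLUTION INTO target_tbl AS t
    USING source_updates AS s
      ON t.id = s.id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```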
- 1427 Views
- 0 replies
- 2 kudos
VariantType + Parse_json()
In Spark 4.0, there are no more data type mismatches when converting dynamic JSON, as the new VariantType data type comes with a new function to parse JSON. Stay tuned for the 4.0 release.
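A minimal sketch of what that might look like, assuming the parse_json function lands in pyspark.sql.functions as previewed; the sample JSON strings are made up:

```python
# Assumed Spark 4.0 API: parse_json returns a VARIANT, so rows with different
# JSON shapes coexist in one column without a schema mismatch.
from pyspark.sql import SparkSession
from pyspark.sql.functions import parse_json, col

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [('{"device": 1, "temp": 21.5}',), ('{"device": 2, "status": "ok"}',)],
    ["raw"],
)

variants = df.select(parse_json(col("raw")).alias("v"))
variants.printSchema()  # v: variant

# Pull a typed field back out (variant_get path/type syntax is an assumption)
variants.selectExpr("variant_get(v, '$.device', 'int') AS device").show()
```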
- 2146 Views
- 0 replies
- 1 kudos
Type widening is in Public Preview
You can now enable type widening on tables backed by Delta Lake. Tables with type widening enabled allow changing the type of columns to a wider data type without rewriting underlying data files. For more information: https://docs.databricks.co...
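A minimal sketch, assuming the delta.enableTypeWidening table property and the ALTER COLUMN ... TYPE syntax described in the linked docs; the table and column names are hypothetical:

```python
# Minimal sketch: enable type widening on a Delta table, then widen a column
# type in place (no data file rewrite). Names are placeholders; the property
# and syntax are per the linked docs, so verify on your runtime.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("""
    ALTER TABLE my_catalog.my_schema.events
    SET TBLPROPERTIES ('delta.enableTypeWidening' = 'true')
""")

# int -> bigint is a widening change, so existing data files are left as-is
spark.sql("""
    ALTER TABLE my_catalog.my_schema.events
    ALTER COLUMN user_id TYPE BIGINT
""")
```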
- 1751 Views
- 1 reply
- 0 kudos
How to convert txt files to delta tables
Hello members of the Databricks community, I am currently working on a project where we collect data from machines; that data is in .txt format. The data is currently in an Azure container. I need to clean the files and convert them to Delta tables, how ...
https://docs.databricks.com/en/ingestion/add-data/upload-data.html
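Beyond the upload UI linked above, a minimal PySpark sketch of one way to do this programmatically, assuming the .txt files are delimited and reachable over abfss://; the path, delimiter, and table name are placeholders:

```python
# Minimal sketch: read delimited .txt files from an Azure container, apply
# light cleaning, and write a Delta table. Paths and names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

raw = (
    spark.read.option("header", "true")
    .option("delimiter", ";")  # adjust to the machines' actual file format
    .csv("abfss://machines@<storage-account>.dfs.core.windows.net/raw/*.txt")
)

cleaned = raw.dropna(how="all").dropDuplicates()

(cleaned.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("main.machine_data.readings"))  # hypothetical table name
```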
- 855 Views
- 0 replies
- 0 kudos
RocksDB for storing state stream
Now, you can keep the state of stateful streaming in RocksDB. For example, retrieving keys from memory to check for duplicate records inside the watermark is now faster. #databricks
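A minimal sketch, assuming the RocksDB state store provider class documented for Databricks; the rate source, dedup keys, and sink are placeholders:

```python
# Minimal sketch: switch the stream's state store to RocksDB (provider class
# per Databricks docs; treat it as an assumption), then dedupe within a watermark.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark.conf.set(
    "spark.sql.streaming.stateStore.providerClass",
    "com.databricks.sql.streaming.state.RocksDBStateStoreProvider",
)

events = spark.readStream.format("rate").load()  # placeholder source

deduped = (
    events.withWatermark("timestamp", "10 minutes")
          .dropDuplicates(["value", "timestamp"])  # keys kept in RocksDB, not on the JVM heap
)

query = (deduped.writeStream
         .format("memory")            # placeholder sink for the sketch
         .queryName("deduped_events")
         .start())
```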
- 916 Views
- 0 replies
- 1 kudos
State of stateful streaming
For stateful streaming in #databricks, you can now easily read what is in the state.
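A minimal sketch using the state reader data source (the statestore and state-metadata formats, available on recent runtimes); the checkpoint path is a placeholder:

```python
# Minimal sketch: inspect a stream's state with the state reader data source.
# The checkpoint path is a placeholder.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Keys and values currently held by the stateful operator
state = spark.read.format("statestore").load("/checkpoints/my_stream")
state.show(truncate=False)

# Metadata about the operators/partitions recorded in the checkpoint
meta = spark.read.format("state-metadata").load("/checkpoints/my_stream")
meta.show(truncate=False)
```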
| Label | Count |
|---|---|
| Access Data | 1 |
| ADF Linked Service | 1 |
| ADF Pipeline | 1 |
| Advanced Data Engineering | 3 |
| AI Agents | 1 |
| AI Readiness | 1 |
| Apache spark | 1 |
| ApacheSpark | 1 |
| Associate Certification | 1 |
| Automation | 1 |
| AWSDatabricksCluster | 1 |
| Azure | 1 |
| Azure databricks | 3 |
| Azure devops integration | 1 |
| AzureDatabricks | 2 |
| Big data | 1 |
| Billing and Cost Management | 1 |
| Blog | 1 |
| Caching | 2 |
| CICDForDatabricksWorkflows | 1 |
| Cluster | 1 |
| Cluster Policies | 1 |
| Cluster Pools | 1 |
| Community Event | 1 |
| Cost Optimization Effort | 1 |
| CostOptimization | 1 |
| custom compute policy | 1 |
| CustomLibrary | 1 |
| Data | 1 |
| Data Analysis with Databricks | 1 |
| Data Engineering | 4 |
| Data Governance | 1 |
| Data Mesh | 1 |
| Data Processing | 1 |
| Databricks Assistant | 1 |
| Databricks Community | 1 |
| Databricks Dashboard | 1 |
| Databricks Delta Table | 1 |
| Databricks Demo Center | 1 |
| Databricks Job | 1 |
| Databricks Migration | 2 |
| Databricks Mlflow | 1 |
| Databricks Notebooks | 1 |
| Databricks Support | 1 |
| Databricks Unity Catalog | 2 |
| Databricks Workflows | 1 |
| DatabricksML | 1 |
| DBR Versions | 1 |
| Declartive Pipelines | 1 |
| DeepLearning | 1 |
| Delta Lake | 1 |
| Delta Live Table | 1 |
| Delta Live Tables | 1 |
| Delta Time Travel | 1 |
| Devops | 1 |
| DimensionTables | 1 |
| DLT | 2 |
| DLT Pipelines | 3 |
| DLT-Meta | 1 |
| Dns | 1 |
| Dynamic | 1 |
| Free Databricks | 3 |
| GenAI agent | 1 |
| GenAI and LLMs | 2 |
| GenAIGeneration AI | 1 |
| Generative AI | 1 |
| Genie | 1 |
| Governance | 1 |
| Hive metastore | 1 |
| Hubert Dudek | 1 |
| Lakeflow Pipelines | 1 |
| Lakehouse | 1 |
| Lakehouse Migration | 1 |
| Lazy Evaluation | 1 |
| Learning | 1 |
| Library Installation | 1 |
| Llama | 1 |
| Medallion Architecture | 1 |
| Migrations | 1 |
| MSExcel | 2 |
| Multiagent | 1 |
| Networking | 2 |
| Partner | 1 |
| Performance | 1 |
| Performance Tuning | 1 |
| Private Link | 1 |
| Pyspark | 1 |
| Pyspark Code | 1 |
| Pyspark Databricks | 1 |
| Pytest | 1 |
| Python | 1 |
| Reading-excel | 1 |
| Scala Code | 1 |
| Scripting | 1 |
| SDK | 1 |
| Serverless | 2 |
| Spark | 2 |
| Spark Caching | 1 |
| SparkSQL | 1 |
| SQL | 1 |
| SQL Serverless | 1 |
| Support Ticket | 1 |
| Sync | 1 |
| Tutorial | 1 |
| Unit Test | 1 |
| Unity Catalog | 4 |
| Unity Catlog | 1 |
| Warehousing | 1 |
| Workflow Jobs | 1 |
| Workflows | 3 |