- 2653 Views
- 1 reply
- 0 kudos
Databricks JDBC driver attempting to pass access token (JWT) from an external IDP (OKTA)
Configured a Databricks workspace for SSO to an IDP (OKTA). Databricks JDBC driver 02.06.38.1068. Attempting to connect to Databricks using a URL similar to the following, where the access token is obtained from the IDP (OKTA). Using a tool such as SQLSquirrel wit...
- 0 kudos
Yes, latest version 02.06.38.1068. Would like to know if others have successfully passed access tokens from an external IDP via the driver.
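For readers hitting the same wall, below is a minimal sketch of the kind of connection URL involved, based on the driver's documented OAuth token pass-through mode (AuthMech=11, Auth_Flow=0, Auth_AccessToken). Host, HTTP path, and the token are placeholders; verify the property names against your driver version's documentation.

```python
# Hedged sketch: assemble a Databricks JDBC URL that passes an externally
# obtained OAuth access token (e.g. a JWT from OKTA) straight to the driver.
# AuthMech=11 selects OAuth 2.0; Auth_Flow=0 selects token pass-through.
access_token = "<JWT-from-okta>"  # placeholder, obtained out of band from the IDP

jdbc_url = (
    "jdbc:databricks://<workspace-host>:443/default;"
    "transportMode=http;ssl=1;"
    "httpPath=/sql/1.0/warehouses/<warehouse-id>;"
    "AuthMech=11;"   # OAuth 2.0
    "Auth_Flow=0;"   # pass a pre-fetched access token instead of running a flow
    f"Auth_AccessToken={access_token}"
)

print(jdbc_url)  # paste into a JDBC client such as SQuirreL SQL
```

One caveat worth checking: the IDP generally has to mint the token for the Databricks audience, or the workspace will reject it.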
- 2700 Views
- 0 replies
- 0 kudos
Databricks vs. Competitors: Key Features
Hi everyone, can anyone share insights on the key features that differentiate Databricks from its competitors? Looking forward to your thoughts! Thanks!
- 5480 Views
- 4 replies
- 1 kudos
Running a Databricks workflow as a service principal (managed identity) that reads from an Azure DevOps repo fails
Hello, we are running a workflow as a service principal that is an AAD managed identity. This results in the issue: running the Databricks workflow as a service principal that reads from an Azure DevOps repo fails with Failed to checkout Git repository: PERMISSION_DENIED...
- 1 kudos
We managed to solve this problem; however, it is not an elegant solution, and Databricks should simplify this. The steps that have to be done are listed below. We are using a user-assigned managed identity (MI), but I assume this should work for Azure Servic...
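The reply's steps are cut off above, so here is a hedged sketch of the core move as we understand it: registering an AAD-based Git credential for the principal so Databricks checks out the Azure DevOps repo with the principal's own Entra ID token. It assumes the databricks-sdk package and that the managed identity has already been granted access to the DevOps organization and repo; the provider name is the Azure DevOps AAD variant documented for this scenario, but verify it for your setup.

```python
# Hedged sketch, not the poster's exact steps: register a Git credential for
# the service principal / managed identity so repo checkouts use its AAD token.
from databricks.sdk import WorkspaceClient

# Authenticate as the principal that runs the workflow (env vars or Azure CLI).
w = WorkspaceClient()

# "azureDevOpsServicesAad" tells Databricks to use the principal's Entra ID
# (AAD) token against Azure DevOps instead of a personal access token.
w.git_credentials.create(git_provider="azureDevOpsServicesAad")
```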
- 1955 Views
- 1 reply
- 0 kudos
When will DLT support multiple targets?
Due to the limitation that all output data must be stored in one target, we have stopped using DLT until more flexibility is added. If anyone has a workaround, we are open to suggestions.
- 0 kudos
Hi Zavi, one potential workaround is to establish multiple DLT pipelines, with each pipeline specifically configured to point to a unique target. This approach effectively allows a diverse range of output data to be stored across various targets. T...
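To make that workaround concrete, here is a minimal sketch (assuming the databricks-sdk package) that stamps out one pipeline per target schema from the same source notebook; names and paths are hypothetical.

```python
# Hedged sketch: one DLT pipeline per target schema, sharing a notebook.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.pipelines import NotebookLibrary, PipelineLibrary

w = WorkspaceClient()
notebook_path = "/Repos/project/dlt_logic"  # hypothetical shared DLT notebook

for schema in ["bronze_sales", "bronze_inventory"]:  # hypothetical targets
    w.pipelines.create(
        name=f"dlt_{schema}",
        target=schema,  # each pipeline writes to its own target schema
        libraries=[PipelineLibrary(notebook=NotebookLibrary(path=notebook_path))],
    )
```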
- 4717 Views
- 1 reply
- 0 kudos
Databricks custom model serving endpoint failing
Hello all, I have created a custom model serving endpoint in Azure Databricks. This endpoint connects to the Azure OpenAI model and an Azure Postgres connection. All of these Azure services use private endpoints. When I run this notebook, I am able ...
- 1387 Views
- 1 reply
- 0 kudos
VSCode Databricks Extension
Hi all, I've been trying to sync my VSCode IDE with our Databricks GCP workspace using the Databricks extension. I am able to connect, authenticate my account and workspace, and find our clusters. However, when I try to sync a destination it throws a st...
- 0 kudos
@Retired_mod, thanks for your response. I am not running through a proxy, at least not on purpose. How do I know if I am running through a proxy? And where can I find the values of <proxy_url> and <port> so that I can try restarting my VSCode? I have tr...
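For anyone else wondering the same thing, a quick generic check (not specific to the Databricks extension) is to inspect the proxy settings the IDE would inherit from the environment:

```python
# Print proxy-related environment variables plus whatever urllib discovers
# from the OS configuration; empty output suggests no proxy is configured.
import os
import urllib.request

for var in ("HTTP_PROXY", "HTTPS_PROXY", "http_proxy", "https_proxy", "NO_PROXY"):
    print(f"{var} = {os.environ.get(var)}")

print(urllib.request.getproxies())
```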
- 24632 Views
- 0 replies
- 0 kudos
Identity column has null values
I created a table in Databricks using a dbt model pre-hook: CREATE TABLE IF NOT EXISTS accounts (account_id BIGINT GENERATED ALWAYS AS IDENTITY, description STRING, ... other columns ...). I use the same dbt model to merge values into this table in the post...
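Since the question is truncated, here is a minimal sketch of the pattern it describes, assuming a Databricks notebook where `spark` is predefined: with GENERATED ALWAYS AS IDENTITY, the identity column must be omitted from the insert (or merge) column list so Delta can generate the values.

```python
# Hedged sketch of an identity column on a Delta table.
spark.sql("""
    CREATE TABLE IF NOT EXISTS accounts (
        account_id BIGINT GENERATED ALWAYS AS IDENTITY,
        description STRING
    ) USING DELTA
""")

# Insert WITHOUT naming account_id; Delta fills it in.
spark.sql("INSERT INTO accounts (description) VALUES ('first'), ('second')")
spark.sql("SELECT * FROM accounts").show()
```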
- 1755 Views
- 0 replies
- 0 kudos
UCX Installation Error
While downloading and installing ucx from a shell script, I am facing the below error. Can anyone provide a solution? [i] Creating isolated Virtualenv with Python: /c/Program Files/Python312/python. Actual environment location may have moved due to redirect...
- 1217 Views
- 0 replies
- 0 kudos
Fetching CPU and memory data using REST APIs
Hi, I am trying to fetch CPU and memory details from Databricks. Are there any APIs I can connect to using Postman to fetch these details?
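There is no single CPU/memory REST endpoint that we know of, but one hedged option, assuming system tables are enabled in the workspace, is to query `system.compute.node_timeline` through the SQL Statement Execution API, which works from Postman as well. Host, token, warehouse ID, and the exact column names below are assumptions to verify.

```python
# Hedged sketch: fetch recent per-cluster CPU/memory utilization by running
# a SQL statement against system tables via the REST API.
import requests

HOST = "https://<workspace-host>"   # placeholder
TOKEN = "<personal-access-token>"   # placeholder

resp = requests.post(
    f"{HOST}/api/2.0/sql/statements",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "warehouse_id": "<warehouse-id>",  # placeholder
        "statement": """
            SELECT cluster_id,
                   avg(cpu_user_percent) AS avg_cpu_user,
                   avg(mem_used_percent) AS avg_mem_used
            FROM system.compute.node_timeline
            WHERE start_time >= current_timestamp() - INTERVAL 1 HOUR
            GROUP BY cluster_id
        """,
        "wait_timeout": "30s",
    },
    timeout=60,
)
print(resp.json())
```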
- 6312 Views
- 3 replies
- 1 kudos
Setting up Unity Catalog in Azure
Trying to create a metastore that will be connected to external storage (ADLS), but we don't have the option to create a new metastore in the 'Catalog' tab in the UI. Based on some research, we see that we'll have to go into "Manage Account" and then c...
- 1 kudos
I have been wrestling with this question for days now. I seem to be the only one with this question, so I am sure I am doing something wrong. I am trying to create a UC metastore, but there is no option in "Catalog" to create a metastore. This s...
- 1468 Views
- 0 replies
- 0 kudos
Failed deploying bundle via GitLab - Request failed for POST
I'm encountering an issue in my .gitlab-ci.yml file when attempting to execute databricks bundle deploy -t prod. The error message I receive is: Error: Request failed for POST <path>/state/deploy.lock. Interestingly, when I run the same command locally...
- 2335 Views
- 2 replies
- 1 kudos
Data in a DataFrame is also deleted when we delete records from the underlying table
Hi, we are trying to load data from a Delta table into a DataFrame (a copy of the original table). Initially the Delta table has a count of 911. The DataFrame into which the data is loaded also has the same count. Now, we are deleting some records from the Delta...
- 1 kudos
Hi, there is a way to retain the copy of the DataFrame even if the data in the underlying table is manipulated, but it's a memory-expensive operation; be careful while using it: df1 = spark.createDataFrame(df.rdd.map(lambda x: x), schema=df.schema). Here we a...
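Expanding the truncated reply into a fuller sketch (assuming a Databricks notebook where `spark` is predefined): a DataFrame over a Delta table is a lazy query plan, so deletes on the table show up in any later action; forcing a copy, or caching, pins the data as it was when read. The table name is hypothetical.

```python
# Hedged sketch of keeping a snapshot of a DataFrame before table deletes.
df = spark.table("my_delta_table")  # hypothetical table name

# Option 1 (from the reply): rebuild the DataFrame from the underlying RDD,
# which materializes the rows independently of the Delta table.
df1 = spark.createDataFrame(df.rdd.map(lambda x: x), schema=df.schema)

# Option 2: cache and force materialization before the table is modified.
# Note the cache can be evicted, in which case recomputation sees the deletes.
df2 = df.cache()
df2.count()  # action that populates the cache

# ... a subsequent DELETE FROM my_delta_table no longer changes df1's contents.
```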
- 21072 Views
- 6 replies
- 0 kudos
Renaming the database name in Databricks
Team, initially our team created the databases with the environment name appended, e.g. cust_dev, cust_qa, cust_prod. I am looking to standardize on a consistent database name across environments. I want to rename to "cust". All of my tables are ...
- 0 kudos
You can also use CASCADE to drop the schema and its tables as well. It is recursive.
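Since schemas cannot simply be renamed, the approach discussed in this thread amounts to recreating the schema under the new name and then dropping the old one with CASCADE. A minimal sketch, assuming a Databricks notebook where `spark` is predefined and managed Delta tables; DEEP CLONE copies the table data into the new schema.

```python
# Hedged sketch: "rename" a schema by cloning its tables and dropping the old one.
old_schema, new_schema = "cust_dev", "cust"

spark.sql(f"CREATE SCHEMA IF NOT EXISTS {new_schema}")

# Copy each table into the new schema (DEEP CLONE copies the data files).
for t in spark.catalog.listTables(old_schema):
    spark.sql(f"CREATE TABLE {new_schema}.{t.name} DEEP CLONE {old_schema}.{t.name}")

# Then drop the old schema recursively, as the reply notes.
spark.sql(f"DROP SCHEMA {old_schema} CASCADE")
```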
- 1045 Views
- 0 replies
- 0 kudos
How to confirm a workspace ID via an API token?
Hello! We are integrating with Databricks, and we get the API key, workspace ID, and host from our users in order to connect to Databricks. We need to validate the workspace ID because we need it outside of the context of the API key (with webh...
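One hedged way to validate this, assuming standard REST behavior: many Databricks API responses include the numeric workspace (org) ID in the `x-databricks-org-id` response header, which can be compared against the value the user supplied. Host, token, and the claimed ID below are placeholders.

```python
# Hedged sketch: confirm that an API token belongs to the claimed workspace.
import requests

HOST = "https://<workspace-host>"    # placeholder
TOKEN = "<api-token>"                # placeholder
claimed_workspace_id = "1234567890"  # hypothetical user-supplied value

resp = requests.get(
    f"{HOST}/api/2.0/preview/scim/v2/Me",  # any cheap authenticated endpoint
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

actual = resp.headers.get("x-databricks-org-id")
print("match" if actual == claimed_workspace_id else f"mismatch: {actual}")
```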
- 1207 Views
- 1 reply
- 0 kudos
Custom Python package in notebook task using bundle
Hi mates! In my company, we are moving our pipelines to Databricks bundles; our pipelines use a notebook that receives some parameters. This notebook uses a custom Python package to apply the business logic based on the parameters it receives. The thi...
Labels:
- .CSV (1)
- Access Data (2)
- Access Databricks (3)
- Access Delta Tables (2)
- Account reset (1)
- ADF Pipeline (1)
- ADLS Gen2 With ABFSS (1)
- Advanced Data Engineering (2)
- AI (4)
- Analytics (1)
- Apache spark (1)
- Apache Spark 3.0 (1)
- Api Calls (1)
- API Documentation (3)
- App (2)
- Application (1)
- Architecture (1)
- asset bundle (1)
- Asset Bundles (3)
- Auto-loader (1)
- Autoloader (4)
- Aws databricks (1)
- AWS security token (1)
- AWSDatabricksCluster (1)
- Azure (6)
- Azure data disk (1)
- Azure databricks (15)
- Azure Databricks SQL (6)
- Azure databricks workspace (1)
- Azure Unity Catalog (6)
- Azure-databricks (1)
- AzureDatabricks (1)
- AzureDevopsRepo (1)
- Big Data Solutions (1)
- Billing (1)
- Billing and Cost Management (2)
- Blackduck (1)
- Bronze Layer (1)
- Certification (3)
- Certification Exam (1)
- Certification Voucher (3)
- CICDForDatabricksWorkflows (1)
- Cloud_files_state (1)
- CloudFiles (1)
- Cluster (3)
- Cluster Init Script (1)
- Comments (1)
- Community Edition (3)
- Community Event (1)
- Community Group (2)
- Community Members (1)
- Compute (3)
- Compute Instances (1)
- conditional tasks (1)
- Connection (1)
- Contest (1)
- Credentials (1)
- Custom Python (1)
- CustomLibrary (1)
- Data (1)
- Data + AI Summit (1)
- Data Engineer Associate (1)
- Data Engineering (3)
- Data Explorer (1)
- Data Ingestion & connectivity (1)
- Data Processing (1)
- Databrick add-on for Splunk (1)
- databricks (2)
- Databricks Academy (1)
- Databricks AI + Data Summit (1)
- Databricks Alerts (1)
- Databricks App (1)
- Databricks Assistant (1)
- Databricks Certification (1)
- Databricks Cluster (2)
- Databricks Clusters (1)
- Databricks Community (10)
- Databricks community edition (3)
- Databricks Community Edition Account (1)
- Databricks Community Rewards Store (3)
- Databricks connect (1)
- Databricks Dashboard (3)
- Databricks delta (2)
- Databricks Delta Table (2)
- Databricks Demo Center (1)
- Databricks Documentation (4)
- Databricks genAI associate (1)
- Databricks JDBC Driver (1)
- Databricks Job (1)
- Databricks Lakehouse Platform (6)
- Databricks Migration (1)
- Databricks Model (1)
- Databricks notebook (2)
- Databricks Notebooks (4)
- Databricks Platform (2)
- Databricks Pyspark (1)
- Databricks Python Notebook (1)
- Databricks Repo (1)
- Databricks Runtime (1)
- Databricks SQL (5)
- Databricks SQL Alerts (1)
- Databricks SQL Warehouse (1)
- Databricks Terraform (1)
- Databricks UI (1)
- Databricks Unity Catalog (4)
- Databricks Workflow (2)
- Databricks Workflows (2)
- Databricks workspace (3)
- Databricks-connect (1)
- databricks_cluster_policy (1)
- DatabricksJobCluster (1)
- DataCleanroom (1)
- DataDays (1)
- Datagrip (1)
- DataMasking (2)
- DataVersioning (1)
- dbdemos (2)
- DBFS (1)
- DBRuntime (1)
- DBSQL (1)
- DDL (1)
- Dear Community (1)
- deduplication (1)
- Delt Lake (1)
- Delta Live Pipeline (3)
- Delta Live Table (5)
- Delta Live Table Pipeline (5)
- Delta Live Table Pipelines (4)
- Delta Live Tables (7)
- Delta Sharing (2)
- deltaSharing (1)
- Deny assignment (1)
- Development (1)
- Devops (1)
- DLT (10)
- DLT Pipeline (7)
- DLT Pipelines (5)
- Dolly (1)
- Download files (1)
- Dynamic Variables (1)
- Engineering With Databricks (1)
- env (1)
- ETL Pipelines (1)
- External Sources (1)
- External Storage (2)
- FAQ for Databricks Learning Festival (2)
- Feature Store (2)
- Filenotfoundexception (1)
- Free trial (1)
- GCP Databricks (1)
- GenAI (1)
- Getting started (2)
- Google Bigquery (1)
- HIPAA (1)
- Hubert Dudek (19)
- import (1)
- Integration (1)
- JDBC Connections (1)
- JDBC Connector (1)
- Job Task (1)
- Learning (1)
- Lineage (1)
- LLM (1)
- Login (1)
- Login Account (1)
- Machine Learning (3)
- MachineLearning (1)
- Materialized Tables (2)
- Medallion Architecture (1)
- meetup (1)
- Metadata (1)
- Migration (1)
- ML Model (2)
- MlFlow (2)
- Model Training (1)
- Module (1)
- Monitoring (1)
- Networking (1)
- Notebook (1)
- Onboarding Trainings (1)
- OpenAI (1)
- Pandas udf (1)
- Permissions (1)
- personalcompute (1)
- Pipeline (2)
- Plotly (1)
- PostgresSQL (1)
- Pricing (1)
- Pyspark (1)
- Python (5)
- Python Code (1)
- Python Wheel (1)
- Quickstart (1)
- Read data (1)
- Repos Support (1)
- Reset (1)
- Rewards Store (2)
- Sant (1)
- Schedule (1)
- Serverless (3)
- serving endpoint (1)
- Session (1)
- Sign Up Issues (2)
- Software Development (1)
- Spark Connect (1)
- Spark scala (1)
- sparkui (2)
- Splunk (2)
- SQL (8)
- Summit23 (7)
- Support Tickets (1)
- Sydney (2)
- Table Download (1)
- Tags (3)
- terraform (1)
- Training (2)
- Troubleshooting (1)
- Unity Catalog (4)
- Unity Catalog Metastore (2)
- Update (1)
- user groups (1)
- Venicold (3)
- Voucher Not Recieved (1)
- Watermark (1)
- Weekly Documentation Update (1)
- Weekly Release Notes (2)
- Women (1)
- Workflow (2)
- Workspace (3)