- 2578 Views
- 3 replies
- 2 kudos
Resolved! Changing GCP billing account
Hello, we need to change the billing account associated with our Databricks subscription. Is there any documentation available describing the procedure to be followed? Thanks, Horatiu
Start by logging into the Google Cloud Platform. If you are a new user, you need to create an account before you subscribe to Databricks. Once in the console, start by selecting an existing Google Cloud project, or create a new project, and confirm ...
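For illustration, a minimal sketch of re-pointing the project that hosts the Databricks subscription at a different billing account, using the google-cloud-billing client library; the project and billing-account IDs are hypothetical placeholders, and this is not official Databricks guidance:

```python
# A minimal sketch, not official Databricks guidance.
# pip install google-cloud-billing
from google.cloud import billing_v1

client = billing_v1.CloudBillingClient()

# Hypothetical IDs: the GCP project hosting the Databricks subscription
# and the billing account it should move to.
project = "projects/my-databricks-project"
new_account = "billingAccounts/XXXXXX-XXXXXX-XXXXXX"

# Re-point the project at the new billing account; the caller needs
# billing-admin rights on both the project and the target account.
client.update_project_billing_info(
    name=project,
    project_billing_info=billing_v1.ProjectBillingInfo(
        billing_account_name=new_account
    ),
)
```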
- 1005 Views
- 1 replies
- 0 kudos
Infrastructure question
We've noticed that the GKE worker nodes that are automatically created when a Databricks workspace is created inside a GCP project are using the default Compute Engine SA, which is not the best security approach; even Google doesn't recommend using defaul...
- 1006 Views
- 0 replies
- 0 kudos
How do I add static tag values in the AWS databricks-multi-workspace.template.yaml?
Hello Team, I have a Databricks workspace running in an AWS environment. I have a requirement where the team wants to add a few customized tags. As per the docs, I see below the recommendation: TagValue: Description: All new AWS objects get a tag with t...
- 3499 Views
- 1 replies
- 0 kudos
Recreating Unity Catalog objects across different environments
Hi all! I am working on a DevOps project to automate the creation of UC objects across different environments (dev-test-prod). Each time we deploy our code to a different environment (using a GitHub workflow, not really relevant) we want to also cre...
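One common pattern for this is to derive object names from an environment variable set by the workflow and make every statement idempotent with IF NOT EXISTS. A minimal sketch; the catalog/table names and the DEPLOY_ENV variable are hypothetical:

```python
# A minimal sketch: environment-suffixed, idempotent UC object creation.
import os
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

env = os.environ.get("DEPLOY_ENV", "dev")  # set by the GitHub workflow: dev/test/prod
catalog = f"analytics_{env}"               # hypothetical naming convention

spark.sql(f"CREATE CATALOG IF NOT EXISTS {catalog}")
spark.sql(f"CREATE SCHEMA IF NOT EXISTS {catalog}.core")
spark.sql(f"""
    CREATE TABLE IF NOT EXISTS {catalog}.core.events (
        id BIGINT,
        ts TIMESTAMP
    )
""")
```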
- 3863 Views
- 4 replies
- 2 kudos
CI/CD pipeline using GitHub
Hi Team, I've recently begun working with Databricks and I'm exploring options for setting up a CI/CD pipeline to pull the latest code from GitHub. I have to pull the latest code (.sql) from GitHub whenever a push is done to the main branch and update the .sql notebo...
FWIW: we pull manually, but it is possible to automate that at no cost if you use Azure DevOps. There is a free tier (depending on the number of pipelines/duration).
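For illustration, a minimal sketch of the upload step such a pipeline (GitHub Actions or Azure DevOps) could run after checking out the repo, using the Databricks Python SDK; the sql/ folder and the target workspace path are hypothetical:

```python
# A minimal sketch: push checked-out .sql files into the workspace.
# pip install databricks-sdk
import base64
from pathlib import Path

from databricks.sdk import WorkspaceClient
from databricks.sdk.service.workspace import ImportFormat, Language

# Reads DATABRICKS_HOST / DATABRICKS_TOKEN from the CI environment.
w = WorkspaceClient()

for sql_file in Path("sql").glob("*.sql"):       # hypothetical repo layout
    target = f"/Shared/etl/{sql_file.stem}"      # hypothetical workspace folder
    w.workspace.import_(
        path=target,
        content=base64.b64encode(sql_file.read_bytes()).decode(),
        format=ImportFormat.SOURCE,
        language=Language.SQL,
        overwrite=True,                          # replace the notebook on each push
    )
```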
- 2767 Views
- 4 replies
- 1 kudos
Azure Databricks Workspace Editor - Cursor messed up - cannot edit code
I have been using the Azure Databricks Workspace Editor for a few weeks to put together a Python script as well as a notebook. All was well until yesterday evening. Since then I suddenly have the following issue: the cursor in the editor is misbehaving...
Thanks @Chibberto - I will try the zoom level to see if it makes a difference. In the meantime, the latest issue is that the autosave sometimes does not kick in for several minutes. So, if I make a change and then re-run the job, the latest code is ...
- 1084 Views
- 1 replies
- 1 kudos
Resolved! Databricks Contact us form not working
I got some issues with the Databricks online certification. I filed twice at (https://help.databricks.com/s/contact-us?ReqType=training), but did not get any confirmation emails. @Cert-Team
@Kaniz @Cert-Team Finally figured out why my request got dropped silently: I included a link in the form. Please indicate in the form submission section that links are not allowed. Thanks a lot.
- 1330 Views
- 0 replies
- 0 kudos
'xgboost.spark.core' has no attribute 'SparkXGBClassifierModel'
I got the error "'xgboost.spark.core' has no attribute 'SparkXGBClassifierModel'" when attempting to load a model. I have upgraded to xgboost 2.0.0.
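For reference, a minimal sketch of saving and loading such a model through the public xgboost.spark import path rather than the private xgboost.spark.core module; if the model was written by an older xgboost, pinning the version it was saved with is another option. The paths are hypothetical:

```python
# A minimal sketch using the public xgboost.spark API (xgboost >= 1.7).
from xgboost.spark import SparkXGBClassifier, SparkXGBClassifierModel

model_path = "dbfs:/models/xgb_classifier"  # hypothetical location

# Training side (assumes train_df has "features" and "label" columns):
# model = SparkXGBClassifier(label_col="label").fit(train_df)
# model.save(model_path)

# Loading side: import the model class from xgboost.spark, not xgboost.spark.core.
model = SparkXGBClassifierModel.load(model_path)
```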
- 1013 Views
- 0 replies
- 0 kudos
Using Databricks to build a SQL Server data warehouse
Hello, I am new to Databricks. Is Databricks a good tool to build a SQL Server data warehouse using Azure? How does this compare to Azure Data Factory?
- 1311 Views
- 1 replies
- 1 kudos
Resolved! Beginner here. Which exam to do first?
I am new to Databricks and would like to learn and become certified. I have SQL knowledge. To get started, which exam should I do first so that I have a very good understanding of Databricks fundamentals and concepts? I was thinking of “Databricks Cert...
Hi @mipayof346, the Data Engineer Associate certification exam assesses an individual’s ability to use the Databricks Lakehouse Platform to complete introductory data engineering tasks. This includes an understanding of the Lakehouse Platform and its ...
- 1036 Views
- 0 replies
- 0 kudos
Calculation on a DataFrame
Hi, I need to do the following calculations on a DataFrame. They should be done for each period, and the calculated value will be used for the next period's calculation. Adding sample data and the formula from Excel here. Thanks in advance for your help. Need to calcula...
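Since the formula itself is truncated above, here is a minimal stand-in sketch of one way to compute a value per period that feeds into the next period's calculation, using applyInPandas; the recurrence (prev * 1.05 + amount) is hypothetical and should be replaced by the Excel formula:

```python
# A minimal sketch of a period-by-period recurrence on a DataFrame.
import pandas as pd
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, 100.0), (2, 50.0), (3, 75.0)], ["period", "amount"])

def running_calc(pdf: pd.DataFrame) -> pd.DataFrame:
    # Walk the rows in period order, feeding each result into the next period.
    pdf = pdf.sort_values("period")
    prev, values = 0.0, []
    for amount in pdf["amount"]:
        prev = prev * 1.05 + amount  # hypothetical recurrence; substitute the real formula
        values.append(prev)
    return pdf.assign(value=values)

result = df.groupBy().applyInPandas(
    running_calc, schema="period long, amount double, value double"
)
result.show()
```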
- 1081 Views
- 1 replies
- 0 kudos
New to Databricks, which certification should I start with?
I’m new to Databricks and don’t have any practical experience. I can write SQL code fluently. I’d like to get Databricks certified; which exam should I start with to get a good understanding of the fundamentals? Also, what are the most important basic conc...
@mipayof346 it's best to start with the Lakehouse Fundamentals Accreditation (free course, free assessment). Then I recommend that you move to the Data Analyst certification path. Details on both can be found here: https://www.databricks.com/learn/ce...
- 11699 Views
- 5 replies
- 1 kudos
Installed library / module not found through Databricks Connect LTS 12.2
Hi all, we recently upgraded our Databricks compute cluster from runtime version 10.4 LTS to 12.2 LTS. After the upgrade, one of our Python scripts suddenly fails with a module-not-found error, indicating that our custom-built module "xml_parser" i...
FYI: for now we have found a workaround. We are adding the package as a ZIP file to the current Spark session with .addPyFile. So after creating a Spark session using Databricks Connect we run the following: spark.sparkContext.addPyFile("C:/path/to/custom...
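Spelled out, that workaround looks roughly like this (the ZIP path is a hypothetical placeholder; the module name comes from the thread):

```python
# A minimal sketch of the addPyFile workaround on classic databricks-connect.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Ship the zipped package to the cluster so it can be imported remotely.
spark.sparkContext.addPyFile("C:/path/to/xml_parser.zip")  # hypothetical path

import xml_parser  # the custom module from the thread, now resolvable
```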
- 1295 Views
- 1 replies
- 1 kudos
Resolved! Selective column load from a Unity Catalog table
I am loading a table into a DataFrame using df = spark.table(table_name). Is there a way to load only the required columns? The table has 50+ columns and I only need a handful of them.
@vk217 Simply use the select function, e.g. df = spark.read.table(table_name).select("col1", "col2", "col3")
- 1415 Views
- 1 replies
- 0 kudos
Reading tables from different Databricks clusters
Hello, my organization uses two clusters for Dev and Prod. We mount our Azure blobs onto the Delta lake to store the Delta tables. Prod has a bunch of data and Dev has limited data. I want to move the data from Prod to Dev for testing purposes. How can I do...
It depends on the current setup, how your clusters are working right now, and how your data is stored. One alternative could be mounting the Dev storage to the Prod cluster and executing a DEEP CLONE (https://docs.databricks.com/en/sql/language-manual/delt...
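For illustration, a minimal sketch of that DEEP CLONE approach, run from the Prod workspace with the Dev storage mounted; the mount points and table paths are hypothetical:

```python
# A minimal sketch: copy data and metadata from a Prod Delta table to Dev.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("""
    CREATE OR REPLACE TABLE delta.`/mnt/dev/tables/sales`
    DEEP CLONE delta.`/mnt/prod/tables/sales`
""")
```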