Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

Forum Posts

horatiug
by New Contributor III
  • 2578 Views
  • 3 replies
  • 2 kudos

Resolved! Changing GCP billing account

Hello, we need to change the billing account associated with our Databricks subscription. Is there any documentation available describing the procedure to be followed? Thanks, Horatiu

Latest Reply
Priyag1
Honored Contributor II
  • 2 kudos

Start by logging into the Google Cloud Platform. If you are a new user, you need to create an account before you subscribe to Databricks. Once in the console, start by selecting an existing Google Cloud project, or create a new project, and confirm ...

2 More Replies
horatiug
by New Contributor III
  • 1005 Views
  • 1 reply
  • 0 kudos

Infrastructure question

We've noticed that the GKE worker nodes that are automatically created when a Databricks workspace is created inside a GCP project are using the default Compute Engine SA, which is not the best security approach; even Google doesn't recommend using defaul...

nramya
by New Contributor
  • 1006 Views
  • 0 replies
  • 0 kudos

How do I add static tag values in the AWS databricks-multi-workspace.template.yaml?

Hello Team, I have a Databricks workspace running in an AWS environment. I have a requirement where the team wants to add a few customized tags. Per the docs, I see the recommendation below: TagValue: Description: All new AWS objects get a tag with t...

DatBoi
by Contributor
  • 3499 Views
  • 1 reply
  • 0 kudos

Recreating Unity Catalog objects across different environments

Hi all! I am working on a DevOps project to automate the creation of UC objects across different environments (dev-test-prod). Each time we deploy our code to a different environment (using a GitHub workflow, not really relevant) we want to also cre...

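A minimal sketch (not from the thread) of one common way to do what the poster describes: parameterize the catalog name per environment and rely on IF NOT EXISTS so the deployment workflow can re-run safely. All catalog, schema, and table names here are placeholder assumptions.

    # Sketch only: idempotent creation of UC objects so a CI workflow can
    # run it against dev/test/prod. All names are placeholders; spark comes
    # from the Databricks notebook/job context.
    env = "dev"  # injected by the deployment workflow (dev/test/prod)
    catalog = f"{env}_catalog"

    spark.sql(f"CREATE CATALOG IF NOT EXISTS {catalog}")
    spark.sql(f"CREATE SCHEMA IF NOT EXISTS {catalog}.sales")
    spark.sql(f"""
        CREATE TABLE IF NOT EXISTS {catalog}.sales.orders (
            order_id BIGINT,
            amount   DOUBLE
        )
    """)
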
Rsa
by New Contributor II
  • 3863 Views
  • 4 replies
  • 2 kudos

CI/CD pipeline using GitHub

Hi Team, I've recently begun working with Databricks and I'm exploring options for setting up a CI/CD pipeline to pull the latest code from GitHub. I have to pull the latest code (.sql) from GitHub whenever a push is done to the main branch and update the .sql notebo...

Latest Reply
-werners-
Esteemed Contributor III
  • 2 kudos

FWIW: we pull manually, but it is possible to automate that at no cost if you use Azure DevOps. There is a free tier (depending on the number of pipelines/duration).

3 More Replies
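A minimal sketch of the import step such a pipeline would run after checking out the repo, using the Databricks REST API's workspace import endpoint. The host, token, and both file paths are placeholder assumptions.

    # Sketch only: push a .sql file from a CI checkout into the workspace.
    # DATABRICKS_HOST/DATABRICKS_TOKEN and both paths are placeholders.
    import base64
    import os

    import requests

    host = os.environ["DATABRICKS_HOST"]   # e.g. https://adb-....azuredatabricks.net
    token = os.environ["DATABRICKS_TOKEN"]

    with open("queries/report.sql", "rb") as f:   # file pulled from GitHub
        content = base64.b64encode(f.read()).decode()

    resp = requests.post(
        f"{host}/api/2.0/workspace/import",
        headers={"Authorization": f"Bearer {token}"},
        json={
            "path": "/Shared/report",   # target notebook path in the workspace
            "format": "SOURCE",
            "language": "SQL",
            "content": content,
            "overwrite": True,
        },
    )
    resp.raise_for_status()
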
UN
by New Contributor II
  • 2767 Views
  • 4 replies
  • 1 kudos

Azure Databricks Workspace Editor - Cursor messed up - cannot edit code

I have been using the Azure Databricks Workspace Editor for a few weeks to put together a Python script as well as a notebook. All was well until yesterday evening. Since then I suddenly have the following issue: the cursor in the editor is misbehaving...

Latest Reply
UN
New Contributor II
  • 1 kudos

Thanks @Chibberto - I will try the zoom level to see if it makes a difference. In the meantime, the latest issue is that autosave sometimes does not kick in for several minutes. So, if I make a change and then re-run the job, the latest code is ...

3 More Replies
Wayne
by New Contributor III
  • 1084 Views
  • 1 reply
  • 1 kudos

Resolved! Databricks Contact us form not working

I ran into some issues with the Databricks online certification. I filed the form twice at https://help.databricks.com/s/contact-us?ReqType=training but did not get any confirmation emails. @Cert-Team

Latest Reply
Wayne
New Contributor III
  • 1 kudos

@Kaniz @Cert-Team Finally figured out why my request got dropped silently: I included a link in the form. Please indicate that links are not allowed in the form submission section. Thanks a lot.

mipayof346
by New Contributor
  • 1311 Views
  • 1 reply
  • 1 kudos

Resolved! Beginner here. Which exam to do first?

I am new to Databricks and would like to learn and become certified. I have SQL knowledge. To get started, which exam should I do first so that I have a very good understanding of Databricks fundamentals and concepts? I was thinking of “Databricks Cert...

Latest Reply
Cert-Team
Databricks Employee
  • 1 kudos

Hi @mipayof346, The Data Engineer Associate certification exam assesses an individual’s ability to use the Databricks Lakehouse Platform to complete introductory data engineering tasks. This includes an understanding of the Lakehouse Platform and its ...

Hareesh1980
by New Contributor
  • 1036 Views
  • 0 replies
  • 0 kudos

Calculation on a dataframe

Hi, I need to do the following calculations on a DataFrame. They should be done for each period, and the calculated value will be used in the next period's calculation. I'm adding sample data and the formula from Excel here. Thanks in advance for your help. Need to calcula...

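The Excel formula is truncated above, so this is only an illustrative sketch of the usual pattern for a value that carries forward period by period, assuming a made-up recurrence value[t] = value[t-1] * rate + amount[t]. A sequential dependency like this is simplest to compute by sorting and iterating in pandas rather than with Spark column expressions.

    # Illustrative only: the real formula is truncated in the post above.
    # Assumes a hypothetical recurrence value[t] = value[t-1] * rate + amount[t];
    # df and spark come from the notebook context.
    pdf = df.select("period", "rate", "amount").toPandas().sort_values("period")

    values, prev = [], 0.0
    for row in pdf.itertuples(index=False):
        prev = prev * row.rate + row.amount   # previous period feeds the next
        values.append(prev)

    pdf["value"] = values
    result = spark.createDataFrame(pdf)       # back to a Spark DataFrame
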
mipayof346
by New Contributor
  • 1081 Views
  • 1 reply
  • 0 kudos

New to databricks, which certification should I start?

I’m new to Databricks and don’t have any practical experience. I can write SQL code fluently. I’d like to get Databricks certified; which exam should I start with to get a good understanding of the fundamentals? Also, what are the most important basic conc...

Latest Reply
JodySF
Databricks Employee
  • 0 kudos

@mipayof346 it's best to start with the Lakehouse Fundamentals Accreditation (free course, free assessment). Then I recommend that you move to the Data Analyst certification path. Details on both can be found here: https://www.databricks.com/learn/ce...

maartenvr
by New Contributor III
  • 11699 Views
  • 5 replies
  • 1 kudos

Installed Library / Module not found through Databricks Connect LTS 12.2

Hi all, We recently upgraded our Databricks compute cluster from runtime version 10.4 LTS to 12.2 LTS. After the upgrade, one of our Python scripts suddenly fails with a module-not-found error, indicating that our custom module "xml_parser" i...

Latest Reply
maartenvr
New Contributor III
  • 1 kudos

FYI: For now we have found a workaround. We are adding the package as a ZIP file to the current Spark session with .addPyFile. So after creating a Spark session using databricks-connect we run the following: spark.sparkContext.addPyFile("C:/path/to/custom...

4 More Replies
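A minimal sketch of that workaround, assuming the custom package is zipped at a placeholder path; with the legacy databricks-connect client for DBR 12.2 the SparkContext is still exposed, so addPyFile ships the archive to the cluster:

    # Sketch of the workaround above; the ZIP path is a placeholder.
    from pyspark.sql import SparkSession

    # With legacy databricks-connect installed, this builds a remote session.
    spark = SparkSession.builder.getOrCreate()

    # Ship the zipped package so the driver and executors can resolve it.
    spark.sparkContext.addPyFile("C:/path/to/custom/xml_parser.zip")

    import xml_parser  # the custom module is now importable
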
vk217
by Contributor
  • 1295 Views
  • 1 reply
  • 1 kudos

Resolved! Selective column loading from a Unity Catalog table

I am loading a table into a DataFrame using df = spark.table(table_name). Is there a way to load only the required columns? The table has 50+ columns and I only need a handful of them.

Latest Reply
daniel_sahal
Esteemed Contributor
  • 1 kudos

@vk217 Simply use the select function, e.g. df = spark.read.table(table_name).select("col1", "col2", "col3")

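The same works with the spark.table call from the question; with columnar sources such as Delta or Parquet the projection is pushed down, so only the selected columns are actually scanned:

    # Equivalent to the reply above, using spark.table from the question;
    # only the three selected columns are read thanks to column pruning.
    df = spark.table(table_name).select("col1", "col2", "col3")
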
Sai1098
by New Contributor II
  • 1415 Views
  • 1 replies
  • 0 kudos

Reading tables from different Databricks clusters

Hello, My organization uses two clusters, for dev and prod. We mount our Azure blobs onto Delta Lake to store the Delta tables. Prod has a bunch of data and dev has limited data. I want to move the data from prod to dev for testing purposes. How can I do...

Latest Reply
btafur
Databricks Employee
  • 0 kudos

It depends on the current setup: how your clusters are working right now and how your data is stored. One alternative could be to mount the dev storage to the prod cluster and execute a DEEP CLONE (https://docs.databricks.com/en/sql/language-manual/delt...

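A minimal sketch of that suggestion, run from the prod cluster with the dev storage mounted; the table names and mount path are placeholder assumptions. DEEP CLONE copies both the data files and the metadata of a Delta table:

    # Sketch of the DEEP CLONE alternative above; names/paths are placeholders.
    spark.sql("""
        CREATE OR REPLACE TABLE delta.`/mnt/dev_storage/tables/customers`
        DEEP CLONE prod_catalog.sales.customers
    """)
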
