Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.
Learn the basics with these resources: register for an AWS Onboarding Webinar or an Azure Quickstart Lab to learn the fundamentals from a Customer Success Engineer and get all your onboarding questions answered live. Started using Databricks, but have que...
Welcome to Databricks! Here you will find resources for a successful onboarding experience. In this group you can ask quick questions and have them answered by experts to unblock you and accelerate your ramp-up with Databricks.
Hi, I have problems with displaying and saving a table in Databricks. A simple command can run for hours without any progress. Before that point I am not doing any rocket science: the code runs in less than a minute, and I have one join at the end. I am using 7.3 ...
Hi @Just Magy, what is your data source? What types of lazy transformations and actions do you have in your code? Do you partition your data? Please provide more details.
I am using a framework, and I have a query where I am doing df = seg_df.select("*").write.option("compression", "gzip"), and I am getting the below error. When I don't do the write.option I am not getting the error. Why is it giving me a repartition error? Wh...
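For reference, a minimal sketch of the write the question seems to be aiming for (seg_df is the questioner's DataFrame; the format and output path are assumptions, not from the original post). Note that select(*) is not valid Python; select("*") selects all columns, and .option() is called on the DataFrameWriter returned by .write:

# Hedged sketch: write a DataFrame with gzip compression.
(seg_df.select("*")
       .write
       .option("compression", "gzip")   # gzip applies to text-based formats like json/csv
       .format("json")                  # assumed format
       .save("/mnt/out/seg"))           # hypothetical output path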
Currently, we are investigating how to effectively incorporate Databricks' latest orchestration feature, Multi-task Jobs. The default behaviour is that a downstream task is not executed if the previous one has failed for some reason...
Hi @Stefan V, my name is Jan and I'm a product manager working on job orchestration. Thank you for your question. At the moment this is not directly supported yet; it is, however, on our radar. If you are interested in having a short conve...
Dear community, I have the following problem: I have uploaded an ML-model file and transferred it to the directory with %fs mv '/FileStore/Tree_point_classification-1.dlpk' '/dbfs/mnt/group22/Tree_point_classification-1.dlpk'. When I now check ...
There is dbfs:/dbfs/ displayed, so maybe the file is in the /dbfs/dbfs directory? Please check it and try to open it with open('/dbfs/dbfs. You can also use "Data" from the left menu to check what is in the DBFS file system more easily.
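A minimal sketch for verifying where the file actually landed (paths taken from the question above): %fs paths are rooted at dbfs:/, so a destination of '/dbfs/mnt/...' becomes dbfs:/dbfs/mnt/..., while Python's open() sees the DBFS root mounted at /dbfs instead.

# Check both candidate locations with dbutils.fs.ls.
display(dbutils.fs.ls("dbfs:/dbfs/mnt/group22/"))  # where the mv probably put it
display(dbutils.fs.ls("dbfs:/mnt/group22/"))       # where it was probably intended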
Note, I've tested with the same connection variable:
- locally with Scala: works (via the same prod schema registry)
- in the cluster with Python: works
- in the cluster with Scala: fails with a 401 auth error
def setupSchemaRegistry(schemaRegistryUrl: String...
Found the issue: it's the uber package mangling some dependency resolution, which I fixed. Another issue is that currently you can't use the 6.* branch of the Confluent schema registry client in Databricks, because the Avro version is different from the one su...
We are using Databricks. How do we know which libraries are installed by default in Databricks and what versions are installed? I have run pip list, but couldn't find pyspark in the returned list.
Hi @karthick J, if you would like to see all the libraries installed in your cluster and their versions, I would recommend checking the "Environment" tab. There you will be able to find all the libraries installed in your cluster. Please follow t...
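As a quick sketch (assuming a standard Databricks notebook): you can also enumerate the installed Python packages from a cell. Note that pyspark is provided by the Databricks runtime itself rather than installed through pip, which is likely why it doesn't show up in pip list.

# List installed Python packages and check the Spark version from a notebook.
import importlib.metadata
import pyspark

for dist in sorted(importlib.metadata.distributions(),
                   key=lambda d: d.metadata["Name"].lower()):
    print(dist.metadata["Name"], dist.version)

print("Spark version:", pyspark.__version__)  # provided by the runtime, not pip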
(This is a copy of a question I asked on Stack Overflow here, but maybe this community is a better fit for the question.) Setting: Delta Lake, with Databricks SQL compute used by Power BI. I am wondering about the following scenario: we have a column `timest...
In the query I would first filter by date (generated from the timestamp we want to query) and then by the exact timestamp, so it will benefit from partitioning.
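A minimal sketch of that pattern (the table and column names are assumptions, not from the original thread): the table is partitioned by a date column derived from the timestamp, and the query filters on both so the partition filter can prune files before the exact timestamp comparison is applied.

from pyspark.sql import functions as F

ts = "2021-11-03 12:34:56"  # hypothetical timestamp to look up

df = (spark.table("events")                                  # assumed table name
        .where(F.col("date") == F.to_date(F.lit(ts)))        # prunes partitions
        .where(F.col("timestamp") == F.lit(ts).cast("timestamp")))
df.show()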
When deleting a workspace from the Databricks Accounts Console, I noticed the AWS resources (VPC, NAT, etc.) are not removed. Should they be? And if not, is there a clean/simple way of cleaning up the residual AWS resources?
Thank you Prabakar, that's what I figured, but I didn't know if there was documentation on resource cleanup. I'll just go through and find everything the CF stack created and remove it. Regards, Brad
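For anyone following along, a hedged sketch of one way to enumerate what the stack created before deleting resources by hand (assuming boto3 is available; the stack name is a placeholder):

import boto3

# List every AWS resource the Databricks CloudFormation stack created,
# so the leftovers (VPC, NAT, etc.) can be removed deliberately.
cf = boto3.client("cloudformation")
resp = cf.describe_stack_resources(StackName="databricks-workspace-stack")  # placeholder name
for r in resp["StackResources"]:
    print(r["ResourceType"], r["PhysicalResourceId"])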
1. I have data x; I would like to create a new column whose values are 1, 2 or 3.
2. The name of the column is SHIFT; this SHIFT column will be filled automatically if the TIME_CREATED column meets the conditions.
3. The conditi...
You can do something like this in pandas. Note there could be a more performant way to do this too.
import pandas as pd
import numpy as np

df = pd.DataFrame({'a': [1, 2, 3, 4]})
df.head()
>    a
> 0  1
> 1  2
> 2  3
> 3  4
conditions = [(df['a'] <=2...
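Carrying that idea to the SHIFT question above, a fuller sketch with np.select (the actual shift conditions are truncated in the question, so the hour ranges below are assumptions):

import pandas as pd
import numpy as np

df = pd.DataFrame({"TIME_CREATED": pd.to_datetime(
    ["2021-11-01 06:30", "2021-11-01 15:10", "2021-11-01 23:45"])})

hour = df["TIME_CREATED"].dt.hour
conditions = [
    (hour >= 6) & (hour < 14),    # shift 1: 06:00-14:00 (assumed boundaries)
    (hour >= 14) & (hour < 22),   # shift 2: 14:00-22:00 (assumed boundaries)
]
df["SHIFT"] = np.select(conditions, [1, 2], default=3)  # all other hours: shift 3
print(df)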
Are there any plans / capabilities in place or approaches people are using for writing (logging) records failing constraint requirements to separate tables when using Delta Live Tables? Also, are there any plans / capabilities in place or approaches ...
According to the language reference documentation, I do not believe quarantining records is possible right now out of the box. But there are a few workarounds under the current functionality. Create a second table with the inverse of the expectations...
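A minimal sketch of that workaround (the table names and rules are hypothetical; dlt is the Delta Live Tables Python module):

import dlt
from pyspark.sql import functions as F

rules = {"valid_id": "id IS NOT NULL", "valid_amount": "amount > 0"}  # hypothetical rules
quarantine_filter = " OR ".join(f"NOT ({expr})" for expr in rules.values())

@dlt.table
@dlt.expect_all_or_drop(rules)          # rows failing any rule are dropped here
def clean_records():
    return dlt.read("raw_records")      # hypothetical source table

@dlt.table
def quarantined_records():
    # Inverse of the expectations: keep only the rows the clean table dropped.
    return dlt.read("raw_records").where(F.expr(quarantine_filter))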
The below code executes a 'get' API method to retrieve objects from S3 and write them to the data lake. The problem arises when I use dbutils.secrets.get to get the keys required to establish the connection to S3: my_dataframe.rdd.foreachPartition(partition ...
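One likely cause, sketched below with hypothetical scope/key names: dbutils is only available on the driver, so calling dbutils.secrets.get inside foreachPartition fails on the executors. Reading the secrets on the driver first and letting the closure capture the plain values avoids that.

# Runs on the driver; the returned strings are captured by the closure below.
access_key = dbutils.secrets.get(scope="aws", key="access_key")   # hypothetical names
secret_key = dbutils.secrets.get(scope="aws", key="secret_key")

def handle_partition(rows):
    # Runs on the executors: uses the captured values, never dbutils.
    import boto3  # assumed client library for the S3 'get' calls
    s3 = boto3.client("s3",
                      aws_access_key_id=access_key,
                      aws_secret_access_key=secret_key)
    for row in rows:
        ...  # fetch the object for this row and write it to the data lake

my_dataframe.rdd.foreachPartition(handle_partition)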
Howdy @Sandesh Puligundla - Thank you for your question. Thank you for your patience. I'd like to give this a bit longer to see how the community responds. Hang tight!
As the title says, I need to clone code from my private git repo and use it in my notebook. I do something like:
import subprocess

def cmd(command, cwd=None):
    process = subprocess.Popen(command.split(), stdout=subprocess.PIPE, cwd=cwd)
    output, error = process.communicate(...
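For reference, a runnable version of that helper might look like this (the repo URL and token are placeholders, not from the original post):

import subprocess

def cmd(command, cwd=None):
    """Run a shell command and return decoded stdout/stderr."""
    process = subprocess.Popen(command.split(), stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE, cwd=cwd)
    output, error = process.communicate()
    return output.decode(), error.decode()

# Placeholder usage: clone a private repo with a personal access token.
out, err = cmd("git clone https://USER:TOKEN@github.com/org/private-repo.git",
               cwd="/tmp")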
Hi @Andy Huang, yes, you can do it if it's accessible from Databricks. Please refer to: https://docs.databricks.com/repos.html#repos-for-git-integration. Note that Databricks does not support private Git servers, such as Git servers behind a VPN.
I have a few fundamental questions about Spark 3 while running a simple Spark app on my local Mac machine (with 6 cores in total). Please help. local[*] runs my Spark application in local mode with all the cores present on my Mac, correct? It also means tha...
That is a lot of questions in one topic. Let's give it a try:
[1] This all depends on the values of the parameters concerned and the program you run (think joins, unions, repartition, etc.)
[2] spark.default.parallelism is by default the number of cores *...
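A small sketch illustrating the point about local[*] and spark.default.parallelism (standalone, outside Databricks; the app name is a placeholder):

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .master("local[*]")              # local mode, using all available cores
         .appName("parallelism-demo")
         .getOrCreate())

# On local[*], defaultParallelism equals the number of cores on the machine.
print(spark.sparkContext.defaultParallelism)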