I'm trying to use the badRecordsPath option to catch improperly formed records in a CSV file and continue loading the remainder of the file. I can get the option to work using Python like this:

df = spark.read\
    .format("csv")\
    .option("header","true")\
    .op...
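For reference, here is a minimal self-contained sketch of the badRecordsPath pattern; the paths and schema are hypothetical stand-ins, not the original poster's values:

```
# Sketch: read a CSV, diverting malformed rows instead of failing the load.
# All paths and the schema here are hypothetical.
df = (
    spark.read
    .format("csv")
    .option("header", "true")
    .option("badRecordsPath", "/tmp/bad_records")  # malformed rows land here as JSON files
    .schema("col1 INT, col2 INT")                  # explicit schema; see the inferSchema note below
    .load("/tmp/input/data.csv")
)
df.show()
```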
Thanks. It was the inferSchema setting. I tried it with and without the SELECT, and it worked both ways once I added inferSchema. Both of these worked:

drop table my_db.t2;
create table my_db.t2 (col1 int, col2 int);
copy into my_db.t2
from (SELECT cast(...
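Since the snippet is cut off, here is a hedged sketch of the same COPY INTO pattern run through spark.sql; the source location and casts are illustrative, not the poster's actual values:

```
# Hypothetical reconstruction of the COPY INTO pattern above; the source
# path and the column casts are illustrative only.
spark.sql("""
    COPY INTO my_db.t2
    FROM (
        SELECT CAST(col1 AS INT) AS col1, CAST(col2 AS INT) AS col2
        FROM 'dbfs:/tmp/source/'
    )
    FILEFORMAT = CSV
    FORMAT_OPTIONS ('header' = 'true', 'inferSchema' = 'true')
""")
```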
Hi @S Meghala, please go through this GitHub link; you will find a good amount of material there and can learn more this way: https://github.com/AlexIoannides/pyspark-example-project. Please select my answer as best answer if it resolves your query. Thanks, Avira...
I've just learned Delta Live Tables on Databricks Academy and have no environment to try it out. I'm wondering what happens to the pipeline if the notebook consists of both normal tables and DLTs. For example: Table A; DLT A that reads and cleans Table A; T...
Hey @S L, according to your description you have a normal table, Table A, and a DLT table, Table B, so it will throw an error that your upstream table is not a streaming live table; you need to create Table A as a streaming live table if you want to use the ou...
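As a rough illustration of the distinction, here is a minimal DLT sketch in Python; the table names are hypothetical, and whether a batch or streaming read is valid depends on the upstream table, as the answer above notes:

```
import dlt
from pyspark.sql import functions as F

# Hypothetical sketch: a live table that cleans a normal (non-streaming)
# upstream table with a batch read. A streaming read (spark.readStream)
# would require the upstream to be a streaming live table.
@dlt.table(name="table_a_clean", comment="Cleaned copy of Table A")
def table_a_clean():
    return (
        spark.read.table("my_db.table_a")    # illustrative upstream table
        .where(F.col("id").isNotNull())      # illustrative cleaning step
    )
```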
I am testing some SQL code based on the book SQL Cookbook, Second Edition, available from https://downloads.yugabyte.com/marketing-assets/O-Reilly-SQL-Cookbook-2nd-Edition-Final.pdf. Based on page 43, I am OK with the left join, as shown here: However, w...
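The post's actual query is truncated, so only as a generic sketch in the same spirit: a LEFT JOIN run from Python. The emp/dept tables echo the SQL Cookbook's sample schema but are assumptions here, not taken from the post:

```
# Generic LEFT JOIN sketch; table and column names follow the SQL
# Cookbook's emp/dept convention and are assumed, not from the post.
spark.sql("""
    SELECT e.ename, d.dname
    FROM emp e
    LEFT JOIN dept d
      ON e.deptno = d.deptno
""").show()
```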
Hi, I am using Databricks and want to upgrade to Databricks Runtime 11.3 LTS, which now uses Spark 3.3. Current system environment:
Operating System: Ubuntu 20.04.4 LTS
Java: Zulu 8.56.0.21-CA-linux64
Python: 3.8.10
Delta Lake: 1.1.0
Target system ...
I'm trying to set the PYTHONPATH env variable in the cluster configuration: `PYTHONPATH=/dbfs/user/blah`. But in the driver and executor envs it is probably getting overridden, and I don't see it. `%sh echo $PYTHONPATH` outputs: `PYTHONPATH=/databricks/spar...
Update: at last I found a (hacky) solution! In the driver I can dynamically set the sys.path on the workers with: `spark._sc._python_includes.append("/dbfs/user/blah/")`. Combine that with, in the driver:

```
%load_ext autoreload
%autoreload 2
```

and setting: `...
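Put together, the workaround described above looks roughly like this; `/dbfs/user/blah/` is the path from the post, and `_python_includes` is a private attribute, so this may break across runtime versions:

```
import sys

module_dir = "/dbfs/user/blah/"  # path from the post; adjust to your own

sys.path.append(module_dir)                    # make modules importable on the driver
spark._sc._python_includes.append(module_dir)  # private API: adds the path on the workers
```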
Hello, can I create a Spark function in .NET and use it in a DLT table? I would like to encrypt some data. In the documentation, Scala code is used as an example, but would it be possible to write a decryption/encryption function using C# and use it withi...
1. Do we have any feature to merge the cells from one or more notebooks into another notebook?
2. Do we have any feature to copy multiple cells from Excel into multiple cells in a notebook? Generally all Excel data is copied into one cel...
1) We can't merge cells right now.
2) We don't have this feature as well.
3) We don't have multiple editing right now.
4) You will know only if you face an error. A notification will pop up.
5) You can't keep running the execution because the cells can be linke...
Hey guys, I hope you are doing very well. Today I was going through some Databricks documentation and found the DLT documentation, but when I try to implement it, it is not working very well. Can anyone share with me the whole code, step by step, and...
New to Databricks, and here is one thing that confuses me. Since Spark Streaming is already capable of incremental loading via checkpointing, what difference does enabling Auto Loader make?
Auto Loader provides a Structured Streaming source called cloudFiles. Given an input directory path on the cloud file storage, the cloudFiles source automatically processes new files as they arrive, with the option of also processing existing files i...
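A minimal sketch of what that looks like in practice is below; every path, format, and table name here is hypothetical:

```
# Hypothetical Auto Loader stream: the cloudFiles source discovers new files
# in the input directory and tracks schema and progress in the given locations.
df = (
    spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "csv")
    .option("cloudFiles.schemaLocation", "/tmp/schemas/events")
    .load("/tmp/landing/events/")
)

(
    df.writeStream
    .option("checkpointLocation", "/tmp/checkpoints/events")
    .trigger(availableNow=True)      # process what's available, then stop
    .toTable("my_db.events_bronze")  # illustrative target table
)
```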
What is a common practice for writing notebooks that include error handling/exception handling? Is there any example which depicts how a notebook should be written to include error handling?
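No answer appears in this excerpt, but one common pattern (a sketch, not an official recommendation) is to wrap the notebook body in try/except and fail loudly so the job run is marked failed:

```
# Sketch of basic notebook error handling; the table name is illustrative.
try:
    row_count = spark.read.table("my_db.source").count()
    print(f"Processed {row_count} rows")
except Exception as e:
    print(f"Notebook failed: {e}")  # log context in the run output
    raise                           # re-raise so the task/job is marked failed
```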
Understanding Joins in PySpark/Databricks

In PySpark, a `join` operation combines rows from two or more datasets based on a common key. It allows you to merge data from different sources into a single dataset and potentially perform transformations on...
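To ground the description, here is a small self-contained join example; the data and column names are made up:

```
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Made-up data to illustrate joining on a common key.
orders = spark.createDataFrame([(1, "laptop"), (2, "phone")], ["cust_id", "item"])
customers = spark.createDataFrame([(1, "Ada"), (3, "Grace")], ["cust_id", "name"])

# An inner join keeps only keys present on both sides; swap in how="left"
# or how="outer" to compare the other behaviors.
orders.join(customers, on="cust_id", how="inner").show()
```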
I set up a workflow using 2 tasks. Just for demo purposes, I'm using an interactive cluster for running the workflow.

{
  "task_key": "prepare",
  "spark_python_task": {
    "python_file": "file...
Hi @Fran Pérez, just a friendly follow-up. Did any of the responses help you to resolve your question? If so, please mark it as best. Otherwise, please let us know if you still need help.