Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

amama
by New Contributor II
  • 3483 Views
  • 3 replies
  • 1 kudos

How to run spark sql file through Azure Databricks

We have a process that writes Spark SQL to files and will generate thousands of Spark SQL files in the production environment. These files will be created in an ADLS Gen2 directory. Sample spark file: val 2023_I = spark.sql("select rm....

Latest Reply
shan_chandra
Databricks Employee
  • 1 kudos

@amama - you can mount the ADLS storage location in Databricks. Since this is Scala code, you can use a workflow and create tasks to execute the Scala code, providing the mount location as the input.

2 More Replies
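A minimal sketch of the approach above: collect the generated .sql files from the mounted location, then execute each through spark.sql. The directory path and helper name are illustrative, not from the thread.

```python
from pathlib import Path

def read_sql_files(directory):
    """Collect the text of every .sql file under `directory`.

    On Databricks, `directory` would typically be the mount point
    of the ADLS Gen2 container, e.g. /mnt/raw/sql (hypothetical path).
    """
    return {
        str(p): p.read_text()
        for p in sorted(Path(directory).rglob("*.sql"))
    }

# On a cluster, a workflow task would then run each statement:
# for statement in read_sql_files("/mnt/raw/sql").values():
#     spark.sql(statement)
```

The pure-file part runs anywhere; only the commented spark.sql loop needs a Databricks cluster.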
marcusmv
by New Contributor II
  • 3653 Views
  • 2 replies
  • 1 kudos

Resolved! Advanced Data Engineering with Databricks course

I'm looking for materials to prepare for the Databricks Certified Professional Data Engineer exam, but I see two courses titled 'Advanced Data Engineering with Databricks' in the academy (E-VDG8QV and E-19WXD1). Which one of these courses should I be ...

Data Engineering
associate
exam
learning
professional
Latest Reply
marcusmv
New Contributor II
  • 1 kudos

Does anyone know? Would much appreciate it.

1 More Replies
vpaluch
by New Contributor II
  • 5721 Views
  • 1 reply
  • 0 kudos

External Table from partitioned CSV in Unity Catalog.

When I create an External Table in Unity Catalog from a flattened CSV folder, it works as expected: CREATE EXTERNAL LOCATION IF NOT EXISTS raw_data URL 'abfss://raw@storage0account0name.dfs.core.windows.net' WITH (STORAGE CREDENTIAL `a579a...

Data Engineering
Partitioned_CSV
Latest Reply
vpaluch
New Contributor II
  • 0 kudos

Thanks Kaniz, I'm using an External Location authenticated with a Managed Identity, the very same one used for the non-partitioned table and many others that work fine. This account has Storage Blob Contributor rights for all containers and folde...

Etyr
by Contributor II
  • 9277 Views
  • 3 replies
  • 1 kudos

databricks.sql.exc.RequestError OpenSession error None

I'm trying to access a Databricks SQL Warehouse with Python. I'm able to connect with a token on a Compute Instance on Azure Machine Learning. It's a VM with conda installed; I create an env with Python 3.10. from databricks import sql as dbsql dbsq...

Latest Reply
Etyr
Contributor II
  • 1 kudos

The issue was that the new version of databricks-sql-connector (3.0.1) does not handle error messages well. So it gave a generic error and a timeout where it should have given me a 403 and an immediate error message instead of a 900-second timeout. https://gith...

2 More Replies
Rishabh-Pandey
by Databricks MVP
  • 3925 Views
  • 3 replies
  • 5 kudos

www.linkedin.com

Woahhh, #Excel plug-in for #DeltaSharing. Now I can import Delta tables directly into my spreadsheet using Delta Sharing. It puts the power of #DeltaLake into the hands of millions of business users. What does this mean? Imagine a data provider delivering...

Latest Reply
udit02
New Contributor II
  • 5 kudos

If you have any uncertainties, feel free to inquire here or connect with me on my LinkedIn profile for further assistance. https://whatsgbpro.org/

2 More Replies
ShankarReddy
by New Contributor II
  • 1729 Views
  • 1 reply
  • 0 kudos

XML Unmarshalling using JAXB from JavaRDD

I have a JavaRDD with complex nested XML content that I want to unmarshall using JAXB to get the data into Java objects. Can anyone please help with how I can achieve this? Thanks

Data Engineering
java
java spark xml jaxb
jaxb
spark
XML
Latest Reply
ShankarReddy
New Contributor II
  • 0 kudos

I hope this should work:
JavaPairRDD<String, PortableDataStream> jrdd = javaSparkContext.binaryFiles("<path_to_file>");
Map<String, PortableDataStream> mp = jrdd.collectAsMap();
OutputStream os = new FileOutputStream(f);
mp.values().forEach(pd -> { try...

JensH
by New Contributor III
  • 12458 Views
  • 3 replies
  • 3 kudos

Resolved! How to pass parameters to a "Job as Task" from code?

Hi, I would like to use the new "Job as Task" feature but I'm having trouble passing values. Scenario: I have a workflow job which contains 2 tasks. Task_A (type "Notebook"): Read data from a table and, based on the contents, decide whether the workflow in ...

Data Engineering
job
parameters
workflow
Latest Reply
Walter_C
Databricks Employee
  • 3 kudos

I found the following information: value is the value for this task value's key. This command must be able to represent the value internally in JSON format. The size of the JSON representation of the value cannot exceed 48 KiB. You can refer to https...

2 More Replies
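The 48 KiB constraint above is on the JSON representation of the value, so it can be checked before handing anything to the task-values API. A small sketch; the key and value names are made up for illustration, and the commented dbutils calls only exist on a Databricks cluster:

```python
import json

TASK_VALUE_LIMIT = 48 * 1024  # 48 KiB cap on the JSON representation

def fits_task_value_limit(value):
    """Return True if `value`, serialized to JSON, stays under the
    documented task-values size limit."""
    return len(json.dumps(value).encode("utf-8")) <= TASK_VALUE_LIMIT

# In Task_A (Databricks-only API, shown for context):
# dbutils.jobs.taskValues.set(key="decision", value="run_downstream")
# In a downstream task:
# dbutils.jobs.taskValues.get(taskKey="Task_A", key="decision")
```

Checking the size up front gives a clear local error instead of a failure at set time.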
chandraprakash
by New Contributor
  • 2638 Views
  • 2 replies
  • 0 kudos

Find the size of delta table for each month before partition

We have 38 Delta tables. We decided to partition the Delta tables by month, but we have some small tables as well, so we need to find the size of the Delta tables for each month, so that we can choose between partitioning and Z-ordering. Is there a way to find t...

Data Engineering
delta
delta_partitions
Latest Reply
dennyglee
Databricks Employee
  • 0 kudos

For your tables, I'm curious whether you could utilize Liquid Clustering to reduce some of the maintenance issues around choosing Z-Order vs. partitioning. That said, one potential way is to read the Delta transaction log and read the Add Info st...

1 More Replies
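Reading the transaction log as suggested above could look roughly like this: each commit file under _delta_log is JSON lines, and `add` actions carry a file size plus partition values. The partition column name ("month") and the simplified schema are assumptions for illustration:

```python
import json
from collections import defaultdict

def size_per_partition(log_lines, partition_key="month"):
    """Sum file sizes recorded in Delta `add` actions, grouped by one
    partition value. `log_lines` are the JSON lines of a _delta_log
    commit file; `partition_key` is an assumed partition column."""
    totals = defaultdict(int)
    for line in log_lines:
        action = json.loads(line)
        add = action.get("add")
        if add:  # skip commitInfo, metaData, remove, etc.
            part = add.get("partitionValues", {}).get(partition_key, "<none>")
            totals[part] += add.get("size", 0)
    return dict(totals)
```

A real table would need all commit files (and `remove` actions) considered; this only shows the shape of the idea.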
User16826992185
by Databricks Employee
  • 14481 Views
  • 2 replies
  • 4 kudos

Databricks Auto-Loader vs. Delta Live Tables

What is the difference between Databricks Auto-Loader and Delta Live Tables? Both seem to manage ETL for you, but I'm confused about where to use one vs. the other.

Latest Reply
Steve_Lyle_BPCS
Databricks Partner
  • 4 kudos

You say "...__would__ be a piece..." and "...DLT __would__ pick up...". Is DLT built upon AL?

1 More Replies
Maxi1693
by New Contributor II
  • 4187 Views
  • 1 reply
  • 0 kudos

Resolved! Error java.lang.NullPointerException using Autoloader

Hi! I am pulling data from Blob storage into Databricks using Autoloader. This process is working well for almost 10 resources, but for a specific one I am getting this error: java.lang.NullPointerException. Looks like the issue is when I connect to th...

Latest Reply
shan_chandra
Databricks Employee
  • 0 kudos

@Maxi1693 - The value for schemaEvolutionMode should be a string. Could you please try changing .option("cloudFiles.schemaEvolutionMode", None) to .option("cloudFiles.schemaEvolutionMode", "none") and let us know. Refe...

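Since the fix above is "pass the string "none", not Python's None", a small guard makes the mistake impossible to repeat. The set of valid mode strings is an assumption based on the Auto Loader documentation, and the commented readStream pipeline (with its paths) is illustrative:

```python
# Auto Loader schema evolution modes are string values, never Python None.
VALID_MODES = {"addNewColumns", "rescue", "failOnNewColumns", "none"}

def check_schema_evolution_mode(mode):
    """Raise early if the option value would be rejected by Auto Loader."""
    if not isinstance(mode, str) or mode not in VALID_MODES:
        raise ValueError(
            f"cloudFiles.schemaEvolutionMode must be one of {sorted(VALID_MODES)}"
        )
    return mode

# On a cluster the option would then be applied like this (hypothetical paths):
# (spark.readStream.format("cloudFiles")
#     .option("cloudFiles.format", "json")
#     .option("cloudFiles.schemaEvolutionMode", check_schema_evolution_mode("none"))
#     .load("/mnt/source"))
```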
FurqanAmin
by New Contributor II
  • 3953 Views
  • 5 replies
  • 1 kudos

Logs not coming up in the UI - while being written to DBFS

I have a few spark-submit jobs that are run via Databricks workflows. I have configured logging in DBFS and specified a location in my GCS bucket. The logs are present in that GCS bucket for the latest run, but whenever I try to view them from th...

Data Engineering
logging
LOGS
ui
Latest Reply
Lakshay
Databricks Employee
  • 1 kudos

Yes, I meant to set it to None. Is the issue specific to any particular cluster, or do you see it with all the clusters in your workspace?

4 More Replies
Noman_Q
by New Contributor II
  • 5850 Views
  • 2 replies
  • 1 kudos

Error Running Delta Live Pipeline.

Hi guys, I am new to Delta pipelines. I have created a pipeline, and now when I try to run it I get the error message "PERMISSION_DENIED: You are not authorized to create clusters. Please contact your administrator", even though I can crea...

Latest Reply
Noman_Q
New Contributor II
  • 1 kudos

Thank you for responding, @Palash01, and thanks for giving me the direction. To get around it, I had to be granted the "unrestricted cluster creation" permission.

1 More Replies
William_Scardua
by Valued Contributor
  • 1494 Views
  • 1 reply
  • 0 kudos

Cluster types pricing

Hi guys, how can I get the pricing of cluster types (standard_D*, standard_E*, standard_F*, etc.)? I'm doing a study to decrease the price of my current cluster. Any ideas? Thank you!

Latest Reply
Lakshay
Databricks Employee
  • 0 kudos

Hey, you can use the pricing calculator here: https://www.databricks.com/product/pricing/product-pricing/instance-types

AChang
by New Contributor III
  • 9787 Views
  • 1 reply
  • 0 kudos

Resolved! Move a folder from Workspace to DBFS

So, I didn't quite set up my model training output directory correctly, and it saved all my model files to the workspace in the Git repo I was working in. I am trying to move these files to DBFS, but when I try using dbutils.fs.mv, I get this error: ...

Latest Reply
AChang
New Contributor III
  • 0 kudos

Figured it out, I just had to use the !cp command. Here is what I did, and it worked perfectly: !cp -r /Workspace/Repos/$RESTOFPATH /dbfs/folder, and it put the entire folder I was trying to move into that DBFS folder.

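The !cp trick above works because the /dbfs FUSE mount makes DBFS look like a local filesystem, so an ordinary recursive copy does the job. The same idea in Python, with hypothetical paths in the comments:

```python
import shutil

def copy_workspace_folder(src, dst):
    """Recursively copy a folder, mirroring `cp -r src dst`.

    On Databricks, src might be /Workspace/Repos/<user>/<repo>/models
    and dst /dbfs/models (illustrative paths); the /dbfs FUSE mount
    lets normal file APIs write to DBFS.
    """
    return shutil.copytree(src, dst, dirs_exist_ok=True)
```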
JJ_LVS1
by Databricks Partner
  • 5412 Views
  • 4 replies
  • 1 kudos

FiscalYear Start Period Is not Correct

Hi, I'm trying to create a calendar dimension including a fiscal year with a fiscal start of April 1. I'm using the fiscalyear library and am setting the start to month 4, but it insists on setting April to month 7. Runtime 12.1. My code snippet is: start_...

Latest Reply
DataEnginner
New Contributor II
  • 1 kudos

import fiscalyear
import datetime

def get_fiscal_date(year, month, day):
    fiscalyear.setup_fiscal_calendar(start_month=4)
    v_fiscal_month = fiscalyear.FiscalDateTime(year, month, day).fiscal_month  # To get the Fiscal Month
    v_fiscal_quarter = fiscalyea...

3 More Replies
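As a cross-check on what any fiscal-calendar library should return for an April start, the mapping from calendar month to fiscal month is plain modular arithmetic. This standalone sketch is not the fiscalyear library itself, just the expected values:

```python
def fiscal_month(month, start_month=4):
    """Map a calendar month (1-12) to its fiscal month for a fiscal
    year starting in `start_month` (April = 4 matches the thread)."""
    return (month - start_month) % 12 + 1

def fiscal_quarter(month, start_month=4):
    """Fiscal quarter (1-4) derived from the fiscal month."""
    return (fiscal_month(month, start_month) - 1) // 3 + 1
```

With an April start, April should come out as fiscal month 1 (not 7), and March as fiscal month 12 in quarter 4.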