Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

saltuk
by Contributor
  • 2708 Views
  • 0 replies
  • 0 kudos

Using Parquet, passing a partition on INSERT OVERWRITE; the PARTITION clause includes an equation and it gives an error.

I am new to Spark SQL; we are migrating from Cloudera to Databricks. A lot of the SQL has already been converted, and only a few statements are still in progress. We are having some trouble passing an argument and using it in an equation in the PARTITION section. LOGDATE is an argu...

Oricus_semicon
by New Contributor
  • 862 Views
  • 0 replies
  • 0 kudos

oricus-semicon.com

Oricus Semicon Solutions is an innovative semiconductor tools manufacturing company that, with almost 100 years of collective expertise, crafts high-tech bespoke tooling solutions for the global Semiconductor Assembly and Test industry. https://oricus-s...

chaitanya
by New Contributor II
  • 4495 Views
  • 2 replies
  • 4 kudos

Resolved! Facing the issue below while loading data from Blob to Delta Lake

I'm calling a stored procedure, storing the result in a pandas DataFrame, and then creating a list. While creating the list I get the error below: "Databricks execution failed with error state Terminated. For more details please check the run page url: path" An error occurred w...

Latest Reply
shan_chandra
Databricks Employee
  • 4 kudos

@chaitanya​, could you please try disabling Arrow optimization and see if this resolves the issue?
spark.sql.execution.arrow.enabled false
spark.sql.execution.arrow.pyspark.enabled false

1 More Replies
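
If it helps, the two settings mentioned in the reply can be applied from a notebook on an existing session. A minimal sketch, assuming `spark` is the SparkSession that the Databricks runtime provides:

```
# Sketch: disable Arrow-based conversion between Spark and pandas.
# Assumes `spark` is an existing SparkSession (as in a Databricks notebook).
spark.conf.set("spark.sql.execution.arrow.enabled", "false")
spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "false")
```

These can also be set as cluster-level Spark config so they apply to every notebook attached to the cluster.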
sanjoydas6
by New Contributor III
  • 9929 Views
  • 7 replies
  • 1 kudos

Problem faced while trying to Reset my Community Edition Password

I have forgotten my Databricks Community Edition password and am trying to reset it using the Forgot Password link. It says an email will be sent with a link to reset the password, but the email never arrives. However, Databricks mail...

Latest Reply
Anonymous
Not applicable
  • 1 kudos

@Sanjoy Das​ - Popping in here to let you know that we've escalated the issue to the team.

6 More Replies
maranBH
by New Contributor III
  • 3351 Views
  • 3 replies
  • 1 kudos

Resolved! Trained model artifact, CI/CD and Databricks without MLFlow.

Hi all, we are constructing our CI/CD pipelines with the Repos feature, following this guide: https://databricks.com/blog/2021/09/20/part-1-implementing-ci-cd-on-databricks-using-databricks-notebooks-and-azure-devops.html I'm trying to implement my pipes...

Latest Reply
sean_owen
Databricks Employee
  • 1 kudos

So you are managing your models with MLflow and want to include them in a git repository? You can do that in a CI/CD process; it would run the mlflow CLI to copy the model you want (e.g. models:/my_model/production) to a git checkout and then commit i...

2 More Replies
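
For the thread above, the copy step might look like the following sketch. The model name and checkout path are hypothetical, and it assumes the `mlflow` CLI is installed with MLFLOW_TRACKING_URI configured; the block only assembles the command so the exact invocation is visible:

```python
# Sketch of the CI/CD copy step described above (hypothetical names/paths).
# `mlflow artifacts download` fetches a registered model's files so they
# can be placed in a git checkout and committed.
model_uri = "models:/my_model/production"   # hypothetical registered model
dst = "./repo/models/my_model"              # hypothetical git checkout path

cmd = [
    "mlflow", "artifacts", "download",
    "--artifact-uri", model_uri,
    "--dst-path", dst,
]
print(" ".join(cmd))
```

In a pipeline this command would be followed by the usual `git add` / `git commit` / `git push` steps.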
dimsh
by Contributor
  • 14217 Views
  • 3 replies
  • 1 kudos

Resolved! Delta Table is not available in the Databricks SQL

Hi there! I'm trying to read data (a simple SELECT * FROM schema.tabl_a) from the "Queries" tab inside the Databricks SQL platform, but I always get "org.apache.spark.sql.AnalysisException: dbfs:/.../.. doesn't exist" DescribeRelation true, [col_na...

Latest Reply
Anonymous
Not applicable
  • 1 kudos

Because it's a delta table, you don't need to provide the schema.

2 More Replies
RicksDB
by Contributor III
  • 7385 Views
  • 6 replies
  • 6 kudos

Resolved! SingleNode all-purpose cluster for small ETLs

Hi, I have many "small" jobs that need to be executed quickly and at a predictable low cost from several Azure Data Factory pipelines. For this reason, I configured a small single-node cluster to execute those processes. For the moment, everything se...

Latest Reply
RicksDB
Contributor III
  • 6 kudos

@Bilal Aslam​ In my case, it usually depends on the customers and their SLA. Most of them do not have a "true" high-SLA requirement and thus prefer the jobs to be throttled when the actual cost is within a certain range of the budget instead of ...

5 More Replies
Anonymous
by Not applicable
  • 10181 Views
  • 7 replies
  • 3 kudos

Resolved! Issue with quotes in struct type columns when using ODBC

I'm trying to connect to Databricks using pyodbc and I'm running into an issue with struct columns. As far as I understand, struct columns and array columns are not supported by pyodbc, but they are converted to JSON. However, when there are nested c...

Latest Reply
BilalAslamDbrx
Databricks Employee
  • 3 kudos

@Derk Crezee​ - I learned something today. Apparently ODBC does not convert to JSON. There is no defined spec for how to return complex types; in fact, that was only added in SQL:2016. That's exactly what you are running into! End of history lesson. Her...

6 More Replies
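
As an aside on the quoting problem in this thread: when a complex value is serialized as proper JSON, inner quotes are escaped, which is what a client needs in order to parse the value back. A pure-Python illustration (no ODBC involved; the field names are made up):

```python
import json

# A struct-like value whose string field contains a double quote.
row = {"name": 'He said "hi"', "nested": {"k": "v"}}

# Proper JSON serialization escapes the inner quote...
s = json.dumps(row)

# ...so the value round-trips cleanly through a string column.
assert json.loads(s)["name"] == 'He said "hi"'
print(s)
```

A driver that stringifies structs without this escaping produces output that no JSON parser can reliably recover.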
RicksDB
by Contributor III
  • 6007 Views
  • 9 replies
  • 1 kudos

Configure jobs throttling for ephemeral cluster ETLs

Hi, is it possible to configure job throttling so that jobs are queued across a workspace after a given number of concurrent executions when using the ephemeral-cluster pattern? The reason is mainly cost control. We prefer reducing performance rathe...

Latest Reply
RicksDB
Contributor III
  • 1 kudos

Thanks for the help, josephk. I will continue to use an interactive cluster for the time being, until that new feature is released. Hopefully it will support my use case. Is there any visibility on the roadmap, such as an ETA or more information about it?

8 More Replies
barashe
by New Contributor II
  • 2152 Views
  • 1 replies
  • 0 kudos

Installing python modules on databricks job clusters

Unlike all-purpose clusters, the Databricks job new-cluster configuration window does not have a "Libraries" tab in which specific Python modules can be installed. What's the best practice for installing Python modules on such clusters?

Latest Reply
barashe
New Contributor II
  • 0 kudos

It turns out that the option exists outside of the cluster configuration scope, in the task configuration window itself - under "Advanced options" -> "Add dependent libraries".

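
Equivalently, when jobs are created through the Jobs API rather than the UI, dependent libraries are attached per task. A minimal sketch of the relevant fragment (the cluster spec, package version, and wheel path are hypothetical examples):

```python
import json

# Sketch of a Jobs API 2.1 task definition with dependent libraries.
# The Spark version, package, and DBFS wheel path below are hypothetical.
task = {
    "task_key": "etl",
    "new_cluster": {"spark_version": "11.3.x-scala2.12", "num_workers": 2},
    "libraries": [
        {"pypi": {"package": "requests==2.28.1"}},
        {"whl": "dbfs:/libs/my_lib-0.1-py3-none-any.whl"},
    ],
}
print(json.dumps(task, indent=2))
```

This is the same "Add dependent libraries" setting the reply points to, expressed as API payload.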
pthaenraj
by New Contributor III
  • 8748 Views
  • 10 replies
  • 14 kudos

Resolved! Databricks Certified Professional Data Scientist Exam Question Types

Hello, I am not seeing a lot of information regarding the Databricks Certified Professional Data Scientist exam. I took the Associate Developer for Apache Spark exam last year, and the materials for that exam seemed much more focused than what I found for...

Latest Reply
Abdull
New Contributor III
  • 14 kudos

Hello @Sundar R​, yes, I took the exam. Unfortunately I failed to reach the pass mark, even though I got close. Things I could have done differently: I focused so much on mastering each topic, i.e. linear, logistic & regularized regression, ALS, etc. But...

9 More Replies
YSF
by New Contributor III
  • 3539 Views
  • 2 replies
  • 1 kudos

Resolved! Issues with using Databricks-Connect and Petastorm

Has anyone successfully used Petastorm + Databricks-Connect + Delta Lake? The use case is being able to use Delta Lake as a data store regardless of whether I want to use the Databricks workspace for my training tasks. I'm using a cloud-hosted ju...

Latest Reply
YSF
New Contributor III
  • 1 kudos

Because it's janky, or why? I don't need it for customer-facing production; it's more for when I'm using my own HPC or local workstation but want to access data from Delta Lake. I figured it was easier/preferable to setting up my own Spark environment loc...

1 More Replies
guruv
by New Contributor III
  • 22177 Views
  • 4 replies
  • 5 kudos

Resolved! parquet file to include partitioned column in file

Hi, I have a daily scheduled job which processes the data and writes it as parquet files in a specific folder structure like root_folder/{CountryCode}/parquetfiles, where each day the job writes new data for a country code under that country code's folder. I am...

Latest Reply
Hubert-Dudek
Databricks MVP
  • 5 kudos

Most external consumers will read the partition as a column when they are properly configured (for example Azure Data Factory or Power BI). The only way around it is to duplicate the column under another name (you cannot use the same name, as it will generate a conf...

3 More Replies
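
The workaround can be illustrated without Spark: a partitioned parquet write moves the partition column into the directory name and drops it from the file contents, so duplicating it under another name keeps a copy inside the files. A pure-Python sketch of the idea, with hypothetical column names (in Spark this would be a `withColumn` duplicating the column before `write.partitionBy("CountryCode")`):

```python
# Hypothetical rows about to be written partitioned by CountryCode.
rows = [
    {"CountryCode": "US", "value": 1},
    {"CountryCode": "DE", "value": 2},
]

# Duplicate the partition column under a new name before writing.
rows = [dict(r, CountryCodeDup=r["CountryCode"]) for r in rows]

# Simulate a partitioned write: the partition key leaves the record
# (it becomes the directory name), but the duplicate stays in the file.
partitioned = {}
for r in rows:
    key = r.pop("CountryCode")
    partitioned.setdefault(key, []).append(r)

print(partitioned["US"])  # the duplicate column survives in the file contents
```

Consumers that cannot infer partition columns from the directory layout can then read the duplicated column directly from the files.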
