Data Engineering

Forum Posts

Milliman
by New Contributor
  • 440 Views
  • 1 reply
  • 0 kudos

How can we automatically re-run the complete job if any of its associated tasks fails?

I need to re-run the complete job automatically if any of its associated tasks fails. Any help would be appreciated. Thanks.

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Milliman, In Databricks, you can automate the re-run of a job if any of its associated tasks fail. Here are some steps to achieve this: Conditional Task Execution: You can specify “Run if dependencies” to run a task based on the run status o...

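The reply above is truncated; as one hedged sketch of the idea, a final "cleanup" task whose "Run if dependencies" condition is set to At least one failed can re-trigger the whole job through the Jobs 2.1 REST API. The workspace URL, token, and job ID below are placeholders, and you would want a guard against infinite retry loops:

```python
import requests

HOST = "https://<workspace>.cloud.databricks.com"  # placeholder workspace URL
TOKEN = "<personal-access-token>"                  # placeholder token
JOB_ID = 123                                       # placeholder job ID

# Intended to run inside a final task configured with
# "run_if": "AT_LEAST_ONE_FAILED", so it only fires after a task failure.
# Add your own retry-count guard to avoid re-running forever.
resp = requests.post(
    f"{HOST}/api/2.1/jobs/run-now",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"job_id": JOB_ID},
)
resp.raise_for_status()
print("new run_id:", resp.json()["run_id"])
```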
creditorwatch
by New Contributor
  • 267 Views
  • 1 reply
  • 0 kudos

Load data from Aurora to Databricks directly

Hi, does anyone know how to link Aurora to Databricks directly and load data into Databricks automatically on a schedule, without any third-party tools in the middle?

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @creditorwatch, To ingest data into Databricks directly from Amazon Aurora and automate the process on a schedule, you have a few options. Let's explore them: Auto Loader (Recommended): Auto Loader is a powerful feature in Databricks that eff...

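The reply is truncated above; as a rough sketch of the Auto Loader route, note that Auto Loader ingests files from cloud storage rather than reading Aurora directly, so this assumes the Aurora data is first exported to S3 (for example with Aurora's native S3 export). All paths and table names are placeholders:

```python
# Incrementally pick up newly exported files on each scheduled run.
(spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "parquet")
    .option("cloudFiles.schemaLocation", "s3://my-bucket/_schemas/orders")
    .load("s3://my-bucket/aurora-exports/orders/")
    .writeStream
    .option("checkpointLocation", "s3://my-bucket/_checkpoints/orders")
    .trigger(availableNow=True)  # process available files, then stop
    .toTable("bronze.orders"))
```

Scheduling the notebook or job that runs this stream gives the automated, no-third-party ingestion the question asks about.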
Bas1
by New Contributor III
  • 5967 Views
  • 17 replies
  • 20 kudos

Resolved! network security for DBFS storage account

In Azure Databricks the DBFS storage account is open to all networks. Changing that to use a private endpoint or minimizing access to selected networks is not allowed. Is there any way to add network security to this storage account? Alternatively, is...

Latest Reply
Odee79
New Contributor II
  • 20 kudos

How can we secure the storage account in the managed resource group that holds DBFS with restricted network access, given that access from all networks is blocked by our Azure storage account policy?

16 More Replies
alm
by New Contributor III
  • 3186 Views
  • 6 replies
  • 1 kudos

Resolved! How to grant access to views without granting access to underlying tables

I have a medallion architecture: Bronze layer: raw data in tables. Silver layer: refined data in views created from the bronze layer. Gold layer: data products as views created from the silver layer. Currently I have a data scientist that needs access to d...

Latest Reply
MoJaMa
Valued Contributor II
  • 1 kudos

Single-user clusters use a different security mode which is the reason for this difference. On single-user/assigned clusters, you'll need the Fine Grained Access Control service (which is a Serverless service) - that is the solution to this problem (...

5 More Replies
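MoJaMa's reply is truncated above; for context, the usual Unity Catalog pattern behind this thread is to grant SELECT on the gold views only, so the grantee never needs privileges on the underlying bronze/silver tables (the view owner's rights are used to read them). A hedged sketch with placeholder catalog, schema, and principal names:

```python
principal = "`data_scientist@example.com`"  # placeholder principal

# The data scientist can query the view, but not the tables it reads.
spark.sql(f"GRANT USE CATALOG ON CATALOG main TO {principal}")
spark.sql(f"GRANT USE SCHEMA ON SCHEMA main.gold TO {principal}")
spark.sql(f"GRANT SELECT ON VIEW main.gold.sales_summary TO {principal}")
```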
Rishitha
by New Contributor III
  • 1880 Views
  • 4 replies
  • 1 kudos

Delta Live Tables streaming

I'm trying to add a monotonicallyIncreasingId() column to a streaming table and I see the following error: Failed to start stream [table_name] in either append mode or complete mode. Append mode error: Expression(s): monotonically_increasing_id() is not s...

Latest Reply
Niro
New Contributor II
  • 1 kudos

Are aggregations with row_number() combined with a SQL window function and a watermark still supported in Databricks 14.3?

3 More Replies
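For readers hitting the same error as the original post: monotonically_increasing_id() is not supported on streaming DataFrames, and two common workarounds are sketched below. Table names and columns are placeholders, not part of the thread:

```python
from pyspark.sql import functions as F

# Workaround 1: a random surrogate key per row via the SQL uuid() function.
stream_df = (spark.readStream.table("bronze.events")
             .withColumn("row_key", F.expr("uuid()")))

# Workaround 2: let Delta assign keys with an identity column on the target.
spark.sql("""
    CREATE TABLE IF NOT EXISTS silver.events (
        id BIGINT GENERATED ALWAYS AS IDENTITY,
        payload STRING
    )
""")
```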
Brad
by Contributor
  • 1219 Views
  • 5 replies
  • 0 kudos

Is there a way to control the cluster runtime version for DLT

Hi team, When I create a DLT job, is there a way to control the cluster runtime version somewhere? E.g. I want to use 14.3 LTS. I tried to add `"spark_version": "14.3.x-scala2.12",` inside the cluster default label, but it does not work. Thanks

Latest Reply
Brad
Contributor
  • 0 kudos

Thanks, got it. And the cluster has to be in shared mode. Can different DLT jobs share clusters, or while a DLT job is running, can other people use the cluster? It seems each DLT job run starts a new cluster. If it cannot be shared, why does it ha...

4 More Replies
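To round out the thread: DLT pipeline clusters do not accept spark_version; the runtime is tied to the pipeline's release channel instead. A hedged sketch of creating a pipeline via the REST API, with the workspace URL, token, notebook path, and names all placeholders:

```python
import requests

resp = requests.post(
    "https://<workspace>.cloud.databricks.com/api/2.0/pipelines",
    headers={"Authorization": "Bearer <token>"},
    json={
        "name": "my_pipeline",
        "channel": "CURRENT",  # or "PREVIEW" for the newer runtime
        "libraries": [{"notebook": {"path": "/Repos/me/dlt_notebook"}}],
        "clusters": [{"label": "default", "num_workers": 2}],
    },
)
resp.raise_for_status()
```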
Phani1
by Valued Contributor
  • 3527 Views
  • 7 replies
  • 8 kudos

Delta Live Table name dynamically

Hi Team, Can we pass the Delta Live Table name dynamically [from a configuration file, instead of hardcoding the table name]? We would like to build a metadata-driven pipeline.

Latest Reply
Azure_dbks_eng
New Contributor II
  • 8 kudos

I am observing the same error when I add dataset.tablename: org.apache.spark.sql.catalyst.ExtendedAnalysisException: Materializing tables in custom schemas is not supported. Please remove the database qualifier from table 'streaming.dlt_read_test_fil...

6 More Replies
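For the metadata-driven pattern the question asks about, a minimal sketch: table names come from a config list (which could equally be loaded from a file) and are passed to the dlt.table decorator. Note the error in the reply above: use a plain table name, not a schema-qualified one. Config keys and source paths are placeholders:

```python
import dlt

table_configs = [
    {"name": "dlt_orders",    "source": "s3://my-bucket/raw/orders/"},
    {"name": "dlt_customers", "source": "s3://my-bucket/raw/customers/"},
]

def make_table(cfg):
    # A factory function freezes cfg for each generated table.
    @dlt.table(name=cfg["name"])  # plain name, no "schema." qualifier
    def _tbl():
        return (spark.readStream.format("cloudFiles")
                .option("cloudFiles.format", "json")
                .load(cfg["source"]))

for cfg in table_configs:
    make_table(cfg)
```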
pjp94
by Contributor
  • 313 Views
  • 1 reply
  • 0 kudos

pyspark.pandas PandasNotImplementedError

Can someone explain why the code below is throwing an error? My intuition tells me it's my Spark version (3.2.1), but I would like confirmation: d = {'key':['a','a','c','d','e','f','g','h'], 'data':[1,2,3,4,5,6,7,8]} x = ps.DataFrame(d) x[x['...

Latest Reply
shan_chandra
Honored Contributor III
  • 0 kudos

@pjp94 - The error indicates the pandas-on-Spark implementation does not have the method pd.Series.duplicated() implemented. The next step is to use DataFrame methods such as distinct, groupBy, or dropDuplicates instead; see the sketch below.

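A short sketch of the suggested workaround, reconstructing the truncated snippet from the question under the assumption that it filtered on Series.duplicated():

```python
import pyspark.pandas as ps

d = {'key': ['a', 'a', 'c', 'd', 'e', 'f', 'g', 'h'],
     'data': [1, 2, 3, 4, 5, 6, 7, 8]}
x = ps.DataFrame(d)

# Series.duplicated() is not implemented in pandas-on-Spark, so use a
# DataFrame method instead; this keeps one row per key.
deduped = x.drop_duplicates(subset=['key'])
print(deduped.to_pandas())
```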
User_1611
by New Contributor
  • 480 Views
  • 1 reply
  • 0 kudos

TimeoutException: Stream Execution thread for stream [xxxxxx] failed to stop within 15000 milliseconds

TimeoutException: Stream Execution thread for stream [id = xxx, runId = xxxx] failed to stop within 15000 milliseconds (specified by spark.sql.streaming.stopTimeout). See the cause for what was being executed in the streaming query thread. I have a data...

Latest Reply
shan_chandra
Honored Contributor III
  • 0 kudos

@User_1611 - could you please try the following? Reduce the number of streaming queries running on the same cluster. Make sure your code does not try to re-trigger/start an active streaming query. Make sure to collect the thread dumps if this error hap...

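Two concrete knobs for the suggestions above, as a hedged sketch (the error itself names the relevant config):

```python
# Raise the stop timeout if shutdown is merely slow rather than deadlocked;
# spark.sql.streaming.stopTimeout defaults to 15 seconds.
spark.conf.set("spark.sql.streaming.stopTimeout", "60s")

# Inspect active queries before (re)starting one, to avoid re-triggering
# a stream that is still running.
for q in spark.streams.active:
    print(q.name, q.id, q.isActive)
```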
Shan1
by New Contributor II
  • 857 Views
  • 5 replies
  • 0 kudos

Read large volume of parquet files

I have 50k+ parquet files in Azure Data Lake and I have a mount point as well. I need to read all the files and load them into a dataframe. I have around 2 billion records in total, and not all the files have all the columns; the column order may di...

Latest Reply
shan_chandra
Honored Contributor III
  • 0 kudos

@Shan1 - This could be because the files have columns that differ by data type, e.g. integer vs. long, or boolean vs. integer. It can be resolved with mergeSchema=false. Please refer to this code: https://github.com/apache/spark/blob/418bba5ad6053449a141f3c9c31e...

4 More Replies
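A sketch of the schema-merge behaviour discussed above, with a placeholder mount path: mergeSchema=true unions the differing schemas (and tolerates varying column order), but fails when the same column has conflicting types across files; in that case read with mergeSchema=false and cast explicitly.

```python
# Read 50k+ parquet files whose columns and column order differ.
df = (spark.read
      .option("mergeSchema", "true")
      .parquet("/mnt/datalake/events/"))
df.printSchema()
```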
Chandraw
by New Contributor III
  • 1008 Views
  • 3 replies
  • 1 kudos

Resolved! MalformedInputException while saving or retrieving a table

Hi everyone, I am using DBR version 13 and managed tables in a custom catalog; the table location is AWS S3. I am running the notebook on a single-user cluster and facing a MalformedInputException while saving data to tables or reading it. When I am running my noteboo...

Latest Reply
Chandraw
New Contributor III
  • 1 kudos

@Kaniz The issue was resolved as soon as I deployed it to a multi-node dev cluster. The issue only occurs on single-user clusters. It looks like a limitation of running all updates on one node as a distributed system.

2 More Replies
BerkerKozan
by New Contributor III
  • 411 Views
  • 2 replies
  • 1 kudos

Creating All Purpose Cluster in Data Asset Bundles

There is no resource to create an all-purpose cluster, but I need one. Does that mean I should create it via Terraform or DBX and reference it, which I don't prefer?

Latest Reply
BerkerKozan
New Contributor III
  • 1 kudos

Hello @Ayushi_Suthar, Thanks for the quick reply! Where can I see these requests? https://ideas.databricks.com/ideas/DB-I-9451

1 More Reply
Andriy
by New Contributor II
  • 511 Views
  • 3 replies
  • 1 kudos

Get Job Run Status

Is there a way to get a child job run status and show the result within the parent notebook execution? Here is the case: I have a master notebook and several child notebooks. As a result, I want to see which notebook failed. For example, Notebook job s...

Screenshot 2024-02-06 at 17.41.51.png
Latest Reply
Kaniz
Community Manager
  • 1 kudos

Hey there! Thanks a bunch for being part of our awesome community!  We love having you around and appreciate all your questions. Take a moment to check out the responses – you'll find some great info. Your input is valuable, so pick the best solution...

2 More Replies
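A common pattern for the question in this thread, sketched with placeholder notebook paths: dbutils.notebook.run raises an exception when the child notebook fails, so the parent can catch it and report per-child status.

```python
children = ["/Repos/me/child_a", "/Repos/me/child_b"]  # placeholder paths
results = {}

for path in children:
    try:
        # Returns the child's dbutils.notebook.exit() value on success.
        results[path] = dbutils.notebook.run(path, 3600)
    except Exception as e:
        results[path] = f"FAILED: {e}"

for path, status in results.items():
    print(path, "->", status)
```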
anupam676
by New Contributor II
  • 669 Views
  • 2 replies
  • 1 kudos

Resolved! How can I enable disk cache in this scenario?

I have a notebook where I read multiple tables from the Delta lake (let's say the schema is db), and after that I did some transformations (image enclosed) using all these tables, with transformations like join, filter, etc. After the transformation and writin...

Latest Reply
anupam676
New Contributor II
  • 1 kudos

Thank you @shan_chandra 

1 More Reply
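The accepted answer is not shown in full above; as general context, the Databricks disk (IO) cache is controlled by a cluster-level Spark conf. A hedged sketch, with a placeholder table name:

```python
# Enable the Databricks disk (IO) cache for parquet/Delta reads.
spark.conf.set("spark.databricks.io.cache.enabled", "true")

# Optionally pre-warm the cache for a hot table.
spark.sql("CACHE SELECT * FROM db.my_table")
```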
vroste
by New Contributor III
  • 884 Views
  • 2 replies
  • 0 kudos

Delta live tables running count output mode?

I have a DLT pipeline with a table that I want to contain the running aggregation (for the sake of simplicity, let's assume it's a count) for each value of some key column, using a session window. The input table goes back several years, and to clean up aggreg...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @vroste,
  • To configure the update output mode for a running aggregation in Delta Live Tables (DLT), use the outputMode option when writing the DLT table.
  • By default, DLT writes data in complete mode, which outputs the complete result table after...

1 More Reply
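The reply above is truncated, and DLT manages output modes differently from hand-written streams; as a plain Structured Streaming reference for the running-count-with-cleanup question, here is a minimal sketch using a session window plus a watermark so old session state can be dropped. Table, column, and path names are placeholders:

```python
from pyspark.sql import functions as F

# Placeholder source with columns: key, event_time (timestamp).
events = spark.readStream.table("bronze.events")

counts = (events
    .withWatermark("event_time", "7 days")  # bounds state for cleanup
    .groupBy("key", F.session_window("event_time", "30 minutes"))
    .count())

(counts.writeStream
    .outputMode("append")  # a session's count is emitted once the
                           # watermark closes that session
    .option("checkpointLocation", "/tmp/_chk/session_counts")
    .toTable("silver.session_counts"))
```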