- 1681 Views
- 2 replies
- 0 kudos
Hello, we have a Databricks account and workspace, provided by AWS, with SSO enabled. Is there any way to access the Databricks workspace API (jobs, clusters, etc.) using a token retrieved from the identity provider? We can access the Databricks workspace API with A...
Latest Reply
Hey Costin and Anonymous user, have you managed to get this working? Do you have examples by any chance? I'm also trying something similar but I haven't been able to make it work.
> authenticate and access the Databricks REST API by setting the Autho...
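For reference, a minimal sketch of that header-based call, assuming you already have a bearer token (a Databricks PAT, or an OAuth/IdP-issued token your workspace accepts); the workspace URL and token below are placeholders:
```
import requests

# Hypothetical placeholders: your workspace URL and a bearer token
# obtained from your identity provider (or a Databricks PAT).
WORKSPACE_URL = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<bearer-token>"

# List jobs via the Jobs API by passing the token in the Authorization header.
resp = requests.get(
    f"{WORKSPACE_URL}/api/2.1/jobs/list",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
print(resp.json())
```
Whether an IdP-issued token is accepted depends on how token federation is configured for the account; the header mechanics are the same either way.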
1 More Reply
- 2436 Views
- 1 reply
- 0 kudos
Method 1: using the "com.crealytics.spark.excel" package, how do I import the package? Method 2: using pandas, I tried the possible paths but it shows "file not found", and uploading the xls/xlsx file shows no options for importing the dataframe. Help ...
Latest Reply
import pandas as pd
ExcelData = pd.read_excel("/dbfs" + FilePath, sheet_name=sheetName)  # make sure you add /dbfs to FilePath
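For Method 1, a hedged sketch assuming the com.crealytics:spark-excel Maven library is installed on the cluster; the file path is a placeholder:
```
# Assumes the com.crealytics:spark-excel Maven library is attached to the cluster
# and this runs in a Databricks notebook where `spark` and `display` are predefined.
df = (
    spark.read.format("com.crealytics.spark.excel")
    .option("header", "true")       # first row holds column names
    .option("inferSchema", "true")  # infer column types
    .load("dbfs:/FileStore/tables/sample.xlsx")  # hypothetical path
)
display(df)
```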
- 1243 Views
- 2 replies
- 0 kudos
I successfully registered in my Unity Catalog an external database `dwcore` that is hosted on SQL Server. I first added the connection in "External Data": tested the connection and it was successful. I then added the database on top: tested the con...
Latest Reply
Hi @AurelioGesino, it seems you’ve encountered an issue with table names when connecting to an external SQL Server database in Databricks. Let’s break down the situation and explore potential solutions:
- Table Name Case Sensitivity: You’ve correc...
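For anyone reproducing the setup, a hedged sketch of registering the connection and foreign catalog (all names and the secret scope are placeholders); note that Unity Catalog stores object identifiers case-insensitively, which is the usual source of table-name casing surprises:
```
# Hypothetical names and credentials; run from a notebook where `spark` exists.
spark.sql("""
  CREATE CONNECTION IF NOT EXISTS dwcore_conn TYPE sqlserver
  OPTIONS (host 'myserver.example.com', port '1433',
           user 'reader', password secret('my_scope', 'my_key'))
""")
spark.sql("""
  CREATE FOREIGN CATALOG IF NOT EXISTS dwcore_catalog
  USING CONNECTION dwcore_conn
  OPTIONS (database 'dwcore')
""")
# List the mirrored tables; the casing shown follows Unity Catalog's rules.
display(spark.sql("SHOW TABLES IN dwcore_catalog.dbo"))
```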
1 More Reply
- 816 Views
- 1 reply
- 0 kudos
I need to re-run the complete job automatically if any of its associated tasks fails; any help would be appreciated. Thanks
Latest Reply
Hi @Milliman, in Databricks you can automate the re-run of a job if any of its associated tasks fail. Here are some steps to achieve this:
- Conditional Task Execution: you can specify “Run if dependencies” to run a task based on the run status o...
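A related, hedged sketch: task-level retries in the job settings re-run a failing task automatically (Jobs API 2.1; the job_id, task_key, and credentials are placeholders):
```
import requests

WORKSPACE_URL = "https://<your-workspace>.cloud.databricks.com"  # placeholder
TOKEN = "<token>"  # placeholder

# Set task-level retries so a failing task is retried automatically.
payload = {
    "job_id": 123,  # hypothetical job ID
    "new_settings": {
        "tasks": [
            {
                "task_key": "main_task",             # hypothetical task
                "max_retries": 2,                    # retry up to twice
                "min_retry_interval_millis": 60000,  # wait 1 min between tries
                "retry_on_timeout": True,
            }
        ]
    },
}
resp = requests.post(
    f"{WORKSPACE_URL}/api/2.1/jobs/update",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
)
resp.raise_for_status()
```
Note this retries the individual task rather than the whole job; for re-running a failed run, the Jobs 2.1 repair-run endpoint is the usual route.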
- 582 Views
- 1 reply
- 0 kudos
Hi, does anyone know how to link Aurora to Databricks directly and load data into Databricks automatically on a schedule, without any third-party tools in the middle?
Latest Reply
Hi @creditorwatch, to ingest data into Databricks directly from Amazon Aurora and automate the process on a schedule, you have a few options. Let’s explore them:
- Auto Loader (Recommended): Auto Loader is a powerful feature in Databricks that eff...
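Since Auto Loader ingests files from cloud storage rather than from Aurora itself, a hedged sketch assuming Aurora data is first exported to S3 (all paths and table names are placeholders):
```
# Auto Loader picks up new files as they land in S3; runs in a Databricks notebook.
df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "parquet")
    .option("cloudFiles.schemaLocation", "s3://my-bucket/_schemas/aurora")
    .load("s3://my-bucket/aurora-export/")  # hypothetical export location
)
(
    df.writeStream
    .option("checkpointLocation", "s3://my-bucket/_checkpoints/aurora")
    .trigger(availableNow=True)  # process what is new, then stop; schedule as a job
    .toTable("bronze.aurora_data")  # hypothetical target table
)
```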
by Bas1 • New Contributor III
- 8784 Views
- 17 replies
- 20 kudos
In Azure Databricks the DBFS storage account is open to all networks. Changing that to use a private endpoint or minimizing access to selected networks is not allowed. Is there any way to add network security to this storage account? Alternatively, is...
Latest Reply
How can we secure the storage account in the managed resource group that holds DBFS with restricted network access, given that access from all networks is blocked by our Azure storage account policy?
16 More Replies
by alm • New Contributor III
- 5042 Views
- 6 replies
- 2 kudos
I have a medallion architecture:
- Bronze layer: raw data in tables
- Silver layer: refined data in views created from the bronze layer
- Gold layer: data products as views created from the silver layer
Currently I have a data scientist who needs access to d...
Latest Reply
Single-user clusters use a different security mode, which is the reason for this difference.
On single-user/assigned clusters, you'll need the Fine Grained Access Control service (which is a Serverless service); that is the solution to this problem (...
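As a reference for the view-only pattern, a hedged sketch of the Unity Catalog grants involved (catalog, schema, view, and group names are all placeholders):
```
# Hypothetical names; the group gets SELECT on the silver view only.
spark.sql("GRANT USE CATALOG ON CATALOG lakehouse TO `data_scientists`")
spark.sql("GRANT USE SCHEMA ON SCHEMA lakehouse.silver TO `data_scientists`")
spark.sql("GRANT SELECT ON TABLE lakehouse.silver.customers_v TO `data_scientists`")
# Views are granted through the TABLE securable; consumers need no grant on
# the underlying bronze tables, since the view reads them with owner rights.
```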
5 More Replies
- 2716 Views
- 4 replies
- 1 kudos
I'm trying to add a monotonically_increasing_id() column to a streaming table and I see the following error: Failed to start stream [table_name] in either append mode or complete mode.
Append mode error: Expression(s): monotonically_increasing_id() is not s...
Latest Reply
Are aggregations with row_number() combined with a SQL window function and a watermark still supported in Databricks 14.3?
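Since monotonically_increasing_id() is rejected on streaming DataFrames, one commonly suggested workaround (hedged; check the current Delta docs for streaming caveats) is to let a Delta identity column assign the ID at write time instead of computing it in the query:
```
# Hypothetical target table; the ID is generated on insert, not in the stream.
spark.sql("""
  CREATE TABLE IF NOT EXISTS events_with_id (
    id BIGINT GENERATED ALWAYS AS IDENTITY,
    payload STRING
  ) USING DELTA
""")

# source_df: a streaming DataFrame with a matching payload column (placeholder).
(
    source_df.writeStream
    .option("checkpointLocation", "/tmp/checkpoints/events_with_id")
    .toTable("events_with_id")
)
```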
3 More Replies
- 2152 Views
- 5 replies
- 0 kudos
Hi team, when I create a DLT job, is there a way to control the cluster runtime version somewhere? E.g. I want to use 14.3 LTS. I tried adding `"spark_version": "14.3.x-scala2.12",` inside the cluster "default" label but it did not work. Thanks
Latest Reply
Thanks, got it. And the cluster has to be in shared mode. Can different DLT jobs share clusters, or while a DLT job is running can other people use the cluster? It seems each DLT job run starts a new cluster. If it is not able to be shared, why it has t...
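On the original question: DLT cluster specs do not accept "spark_version"; the runtime is tied to the pipeline-level channel setting. A hedged sketch of creating a pipeline via the Pipelines API (names, paths, and sizes are placeholders):
```
import requests

WORKSPACE_URL = "https://<your-workspace>.cloud.databricks.com"  # placeholder
TOKEN = "<token>"  # placeholder

# The runtime is controlled by "channel" (CURRENT or PREVIEW), not spark_version.
payload = {
    "name": "my_pipeline",  # hypothetical
    "channel": "CURRENT",
    "clusters": [{"label": "default", "num_workers": 2}],
    "libraries": [{"notebook": {"path": "/Repos/me/dlt_notebook"}}],  # hypothetical
}
resp = requests.post(
    f"{WORKSPACE_URL}/api/2.0/pipelines",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
)
resp.raise_for_status()
```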
4 More Replies
- 751 Views
- 1 reply
- 0 kudos
Can someone explain why the code below is throwing an error? My intuition is telling me it's my Spark version (3.2.1), but I would like confirmation:
import pyspark.pandas as ps

d = {'key': ['a', 'a', 'c', 'd', 'e', 'f', 'g', 'h'],
     'data': [1, 2, 3, 4, 5, 6, 7, 8]}
x = ps.DataFrame(d)
x[x['...
Latest Reply
@pjp94 - The error indicates that the pandas-on-Spark implementation does not have the method below implemented:
pd.Series.duplicated()
The next step is to use DataFrame methods such as distinct, groupBy, or dropDuplicates to resolve this.
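A hedged rewrite of the snippet using methods pandas-on-Spark does implement:
```
import pyspark.pandas as ps

d = {'key': ['a', 'a', 'c', 'd', 'e', 'f', 'g', 'h'],
     'data': [1, 2, 3, 4, 5, 6, 7, 8]}
x = ps.DataFrame(d)

# Instead of x[x['key'].duplicated()], keep one row per key:
deduped = x.drop_duplicates(subset='key')

# Or find the duplicated keys explicitly: count per key, then filter.
counts = x.groupby('key').size()
dup_keys = counts[counts > 1].index.to_pandas().tolist()
dups = x[x['key'].isin(dup_keys)]
```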
- 1051 Views
- 1 reply
- 0 kudos
TimeoutException: Stream Execution thread for stream [id = xxx runId = xxxx] failed to stop within 15000 milliseconds (specified by spark.sql.streaming.stopTimeout). See the cause on what was being executed in the streaming query thread. I have a data...
Latest Reply
@User_1611 - could you please try the following?
- Reduce the number of streaming queries running on the same cluster
- Make sure your code does not try to re-trigger/start an active streaming query
- Make sure to collect the thread dumps if this error hap...
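Alongside that checklist, a hedged mitigation is to raise the stop timeout named in the error and to stop streams explicitly:
```
# Give streaming queries more time to shut down (the default matches the
# 15000 ms in the error message); the value here is a suggestion.
spark.conf.set("spark.sql.streaming.stopTimeout", "60s")

# Stop active queries explicitly rather than relying on cluster teardown.
for q in spark.streams.active:
    q.stop()
```
If the timeout still fires, the thread dumps mentioned above are what reveal which operation the query thread was stuck on.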
by Shan1 • New Contributor II
- 2084 Views
- 5 replies
- 0 kudos
I have 50k+ parquet files in the Azure data lake and I have a mount point as well. I need to read all the files and load them into a dataframe. I have around 2 billion records in total, and not all the files have all the columns; the column order may di...
Latest Reply
@Shan1 - This could be due to the files having columns that differ by data type, e.g. integer vs. long, or boolean vs. integer. It can be resolved with mergeSchema=false. Please refer to this code: https://github.com/apache/spark/blob/418bba5ad6053449a141f3c9c31e...
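A hedged sketch of both paths: merging schemas when only the column sets differ, and casting before a union when the types conflict (the paths and the column name are placeholders):
```
from pyspark.sql import functions as F

# mergeSchema unions column sets that differ across files.
df = (
    spark.read
    .option("mergeSchema", "true")
    .parquet("/mnt/datalake/path/")  # hypothetical mount-point path
)

# If the merge fails because types conflict (e.g. int vs long), read the
# conflicting subsets separately, cast, then union by column name:
df_a = spark.read.parquet("/mnt/datalake/path/batch_a/")  # hypothetical
df_b = spark.read.parquet("/mnt/datalake/path/batch_b/")  # hypothetical
df = df_a.withColumn("amount", F.col("amount").cast("long")).unionByName(
    df_b, allowMissingColumns=True  # tolerate differing column sets
)
```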
4 More Replies
- 1757 Views
- 3 replies
- 1 kudos
Hi everyone, I am using DBR version 13 and managed tables in a custom catalog; the table location is AWS S3. I am running the notebook on a single-user cluster. I am facing MalformedInputException while saving data to tables or reading it. When I am running my noteboo...
Latest Reply
@Kaniz_Fatma The issue was resolved as soon as I deployed it to the multi-node dev cluster. The issue only occurs on single-user clusters. It looks like a limitation of running all updates on one node as a distributed system.
2 More Replies
- 930 Views
- 2 replies
- 1 kudos
There is no resource to create an All Purpose Cluster, but I need one. Does that mean I should create it via Terraform or DBX and reference it, which I would prefer not to do?
Latest Reply
Hello @Ayushi_Suthar, thanks for the quick reply! Where can I see these requests? https://ideas.databricks.com/ideas/DB-I-9451 ?
1 More Reply