- 998 Views
- 2 replies
- 1 kudos
Team, I wanted to understand the best way of connecting to Oracle Essbase to ingest data into Delta Lake.
Latest Reply
@maddan80 I see that Essbase supports ODBC/JDBC connectors. Try using one of those.
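As a hedged sketch of the JDBC route (the driver class, URL format, secret scope, and table names below are illustrative placeholders, not confirmed Essbase specifics), a generic JDBC read into Delta could look like:

# Placeholder driver class and URL; check your Essbase JDBC driver docs for real values.
df = (spark.read.format("jdbc")
      .option("url", "jdbc:essbase://<host>:<port>/<app>")          # placeholder URL
      .option("driver", "<com.example.essbase.jdbc.Driver>")        # placeholder class
      .option("dbtable", "<source_table_or_query>")
      .option("user", dbutils.secrets.get("my_scope", "user"))
      .option("password", dbutils.secrets.get("my_scope", "password"))
      .load())
df.write.format("delta").mode("append").saveAsTable("bronze.essbase_data")  # illustrative target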
1 More Replies
- 1518 Views
- 3 replies
- 2 kudos
Hi Team, I have a query with the below construct in my project:
SELECT count(*) FROM `catalog`.`schema`.`t_table` WHERE _col_check IN (SELECT DISTINCT _col_check FROM `catalog`.`schema`.`t_check_table`)
Actually, there is no column "_col_check" in the sub-que...
Latest Reply
Hi @Harsha777, what you are seeing is called column shadowing. The column names in the main query and the sub-query are identical, and when the Databricks engine does not find the column in the sub-query it resolves it against the main query instead. The simplest way to avoid the...
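A minimal sketch of the usual fix (table and column names taken from the question): qualify every column with a table alias, so a missing column in the sub-query fails fast instead of silently resolving against the outer query:

# Without aliases, _col_check in the sub-query silently resolves against the
# outer table when t_check_table has no such column (column shadowing).
spark.sql("""
    SELECT count(*)
    FROM `catalog`.`schema`.`t_table` AS t
    WHERE t._col_check IN (
        SELECT DISTINCT c._col_check   -- errors out if the column is missing
        FROM `catalog`.`schema`.`t_check_table` AS c
    )
""").show()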
2 More Replies
- 898 Views
- 2 replies
- 0 kudos
We are setting up new DLT Pipelines using the DLT-Meta package. Everything is going well in bringing our data in from Landing to our Bronze layer when we keep the onboarding JSON fairly vanilla. However, we are hitting an issue when using the cdc_app...
Latest Reply
Please check these details: https://github.com/databrickslabs/dlt-meta/issues/90
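For background, dlt-meta's cdc_apply_changes onboarding setting ultimately drives DLT's standard apply_changes API (this mapping is my assumption; the issue linked above covers the exact JSON syntax). A minimal sketch of the underlying call, with illustrative table, key, and sequence names:

import dlt
from pyspark.sql.functions import col

dlt.create_streaming_table("silver_orders")          # illustrative target

dlt.apply_changes(
    target="silver_orders",
    source="bronze_orders_cdc",                      # illustrative CDC feed
    keys=["order_id"],                               # primary-key columns
    sequence_by=col("event_ts"),                     # ordering for out-of-order events
    stored_as_scd_type=1,                            # 1 = upsert, 2 = keep history
)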
1 More Replies
- 568 Views
- 1 reply
- 0 kudos
I have data files categorized by application and region. I want to know the best way to load them into the Bronze and Silver layers while maintaining proper segregation. For example, in our landing zone, we have a structure of raw files to be loaded usi...
Latest Reply
If I understand it correctly, you have source files partitioned by application and region in cloud storage that you want to load and would like some suggestions on the Unity Catalog structure. It will definitely depend on how you want the data to be ...
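As one hedged sketch (the paths, table names, and directory layout are all placeholders), Auto Loader can ingest such a layout into Bronze while keeping application and region as queryable columns:

# Hypothetical landing layout with Hive-style directories:
#   /landing/application=app1/region=emea/part-000.csv
# Spark infers 'application' and 'region' as columns from these paths.
bronze = (spark.readStream.format("cloudFiles")
          .option("cloudFiles.format", "csv")
          .option("header", "true")
          .option("cloudFiles.schemaLocation", "/tmp/_schemas/app_data")      # placeholder
          .load("/landing/"))

(bronze.writeStream
       .option("checkpointLocation", "/tmp/_checkpoints/bronze_app_data")     # placeholder
       .trigger(availableNow=True)
       .toTable("main.bronze.app_data"))                                      # placeholder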
- 1723 Views
- 1 reply
- 0 kudos
Currently I can see the Catalog tab instead of the Data tab in the left-side navigation. I am unable to find Data tab -> File browser, where I would like to upload one sample orders CSV file. Later I want to reference that path in Databricks notebooks as /FileStore/t...
Latest Reply
Hi @Sudheer89, by default the DBFS tab is disabled. As an admin user, you can manage your users' ability to browse data in the Databricks File System (DBFS) using the visual browser interface. Go to the admin console. Click the Workspace Settings tab. In ...
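Once the upload works, a minimal sketch of reading the file back (the file name is illustrative):

# /FileStore paths are visible to Spark as dbfs:/FileStore/...
df = spark.read.csv("/FileStore/tables/orders.csv",   # illustrative file name
                    header=True, inferSchema=True)
display(df)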
- 1401 Views
- 1 reply
- 1 kudos
Does anyone know how to enforce job tags (not the custom cluster tags)? I want to enforce that jobs have certain tags so we can filter our jobs. We are not using Unity Catalog yet.
Latest Reply
Currently, enforcing job tags is not a built-in feature in Databricks. However, you can add tags to your jobs when creating or updating them and filter jobs by these tags on the jobs list page.
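A minimal sketch of attaching tags at creation time with the Databricks SDK (the job name, tags, notebook path, and cluster id are illustrative; actual enforcement would still need an external check, e.g. in CI):

from databricks.sdk import WorkspaceClient
from databricks.sdk.service.jobs import Task, NotebookTask

w = WorkspaceClient()

# Tags set here appear on the jobs list page and can be used as filters.
job = w.jobs.create(
    name="nightly-etl",                                               # illustrative
    tags={"team": "data-eng", "env": "prod"},                         # tags to standardize
    tasks=[Task(
        task_key="main",
        notebook_task=NotebookTask(notebook_path="/Repos/etl/main"),  # illustrative
        existing_cluster_id="<cluster-id>",                           # placeholder
    )],
)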
- 1325 Views
- 1 reply
- 0 kudos
Hi, I am getting the error "Constructor public org.apache.spark.ml.feature.Bucketizer(java.lang.String) is not whitelisted" when using a serverless compute cluster. I have seen in some other articles that this is due to high concurrency - does anyone k...
Latest Reply
The error you're encountering, "Constructor public org.apache.spark.ml.feature.Bucketizer(java.lang.String) is not whitelisted", typically arises when using a shared mode cluster. This is because Spark ML is not supported in shared clusters due to se...
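For reference, a minimal Bucketizer usage that runs on a single-user (assigned) cluster; the splits and column names are illustrative:

from pyspark.ml.feature import Bucketizer

df = spark.createDataFrame([(0.1,), (1.5,), (3.2,)], ["value"])

# Splits must be strictly increasing; -inf/inf give open-ended boundaries.
bucketizer = Bucketizer(splits=[float("-inf"), 1.0, 2.0, float("inf")],
                        inputCol="value", outputCol="bucket")
bucketizer.transform(df).show()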
- 684 Views
- 1 reply
- 0 kudos
Hi Databricks Community, I'm working on setting up SCIM provisioning and need some assistance. SCIM Provisioning URL: Can anyone confirm the correct process to obtain the SCIM Provisioning URL from the Databricks account console? I need to ensure I'm re...
Latest Reply
Which provider are you using? You can use the doc for Okta provisioning to guide you through the process https://docs.databricks.com/en/admin/users-groups/scim/okta.html
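For reference (hedged - the exact format can vary by cloud and may change), the account-level SCIM endpoint shown in the account console is typically of the form:

https://accounts.cloud.databricks.com/api/2.0/accounts/<account-id>/scim/v2

with <account-id> taken from the account console; the user provisioning settings there display the exact URL along with the SCIM token.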
by 637858 • New Contributor II
- 4935 Views
- 2 replies
- 3 kudos
A Databricks account administrator can disable account-wide access to the Personal Compute default policy using the following steps: Navigate to the Databricks Account Console. Click the “Settings” icon. Click the “Feature enablement” tab. Switch the...
Latest Reply
Is there no way to prevent using the Personal Compute policy from a notebook? Or does my question make sense? In other words, is it by design/immutable to have this policy when creating a notebook?
1 More Replies
- 472 Views
- 1 reply
- 0 kudos
Latest Reply
Also, after running this for a while I get these errors:
- 809 Views
- 1 reply
- 0 kudos
I cannot restore my Delta table to a previous version after enabling columnMapping. The following error is shown: DeltaColumnMappingUnsupportedException: Changing column mapping mode from 'name' to 'none' is not supported. Any idea how to roll b...
Latest Reply
Hi @Data_Alchemist, in Databricks Runtime 15.3 and above you can use the DROP FEATURE command to remove column mapping from a table and downgrade the table protocol. Then you can try to restore the table to a previous version. https://docs.databricks.com/en...
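A minimal sketch of that sequence (the table name and version number are illustrative; depending on table history, the protocol downgrade may require waiting out the retention window or using TRUNCATE HISTORY):

# Illustrative table and version.
spark.sql("ALTER TABLE main.default.events DROP FEATURE columnMapping")
spark.sql("RESTORE TABLE main.default.events TO VERSION AS OF 12")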
by NLearn • New Contributor II
- 1058 Views
- 1 reply
- 0 kudos
Hello everyone, I have a requirement to set up Databricks alerts. I am struggling with the email body template. I was trying to create a button using a custom HTML template but it did not work out. Is there any sample template for alerts which can show in be...
- 1058 Views
- 1 replies
- 0 kudos
Latest Reply
Hi @NLearn, it should look like this:
<table>
  <tr><th>column1 header</th><th>column2 header</th></tr>
  {{#QUERY_RESULT_ROWS}}
  <tr>
    <td>{{column1}}</td> <td>{{column2}}</td>
  </tr>
  {{/QUERY_RESULT_ROWS}}
</table>
If you're going to use custom templa...
- 898 Views
- 1 reply
- 0 kudos
Hello All, I want a tool which can be used in Databricks to convert my SQL Server DDL to Databricks DDL. For example, I have one DDL in SQL Server as below: CREATE TABLE [schema1].[table_1]([BranchKey] [int] NOT NULL,[Branch Code1] [int] NULL,[...
Latest Reply
@JRDWaghmare The ReMorph transpiler might be handy here: https://github.com/databrickslabs/remorph
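As a hedged sketch of what the converted DDL could look like for the visible portion of that statement (only the two columns shown above; the truncated remainder is left out), with backticks handling the space in "Branch Code1":

spark.sql("""
    CREATE TABLE schema1.table_1 (
        BranchKey      INT NOT NULL,
        `Branch Code1` INT
        -- remaining columns from the truncated original omitted
    )
""")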
by NC • New Contributor III
- 1210 Views
- 4 replies
- 0 kudos
Hi All, I am trying to migrate files from ADLS to a newly created UC external volume. I tried using dbutils.fs.cp but it failed. May I know the right way to copy files without any transformation from ADLS to a UC external volume? Thank you
Latest Reply
Instead of copying your files from ADLS to UC volumes, you can create a storage credential and an external location. This allows you to access all your ADLS data directly through the catalog explorer under external locations. For guidance on creating...
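If a copy is still required, a minimal sketch using a UC volume path (every name here is a placeholder; this assumes an external location or storage credential already grants access to the ADLS path):

# Placeholders throughout: storage account, container, catalog/schema/volume.
src = "abfss://landing@mystorageacct.dfs.core.windows.net/exports/"
dst = "/Volumes/main/default/my_volume/exports/"

dbutils.fs.cp(src, dst, recurse=True)   # recurse=True copies the whole tree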
3 More Replies
- 4179 Views
- 3 replies
- 1 kudos
Hi, I have recently deployed a new workspace in AWS and am getting the following error when trying to start the cluster: "NPIP tunnel setup failure during launch. Please try again later and contact Databricks if the problem persists. Instance bootstrap f...
Latest Reply
Finally, this error was fixed by changing the DHCP configuration of the VPC.
2 More Replies