- 10 Views
- 0 replies
- 0 kudos
Neither the standard nor a non-standard repo seems available. Any idea how to debug/fix this? %r
install.packages("gghighlight", lib = "/databricks/spark/R/lib", repos = "http://cran.us.r-project.org")
Warning: unable to access index for repository http://cra...
- 24 Views
- 0 replies
- 0 kudos
Hi all! I've been dropping and recreating Delta tables at the new location. For one table something went wrong and now I can neither DROP nor recreate it. It is visible in the catalog; however, when I click on the table I see the message: [INTERNAL_ERROR] The ...
- 466 Views
- 6 replies
- 4 kudos
When running my notebook on personal compute with an instance profile, I am indeed able to readStream from Kinesis. But adding it as a DLT with UC, while specifying the same instance profile in the DLT pipeline settings, causes a "MissingAuthenticatio...
Latest Reply
We have used the roleArn and role session name like this:
CREATE STREAMING TABLE table_name
AS SELECT * FROM STREAM read_kinesis (
  streamName => 'stream',
  initialPosition => 'earliest',
  roleArn => 'arn:aws:iam::ACCT_ID:role/R...
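The same named arguments can typically also be passed through the DataFrame API. The sketch below is plain Python (no Spark required) that just renders how the SQL read_kinesis arguments would map onto an equivalent spark.readStream option chain; the ARN and session name are placeholders, not real values.

```python
# Hypothetical mapping of the SQL read_kinesis arguments onto the DataFrame API.
kinesis_options = {
    "streamName": "stream",
    "initialPosition": "earliest",
    "roleArn": "arn:aws:iam::123456789012:role/example-role",   # placeholder
    "roleSessionName": "databricks-dlt-session",                # placeholder
}

def as_reader_chain(options):
    """Render the equivalent spark.readStream option chain as a string."""
    opts = "".join(f'.option("{k}", "{v}")' for k, v in sorted(options.items()))
    return f'spark.readStream.format("kinesis"){opts}.load()'

print(as_reader_chain(kinesis_options))
```

In a notebook you would build the reader directly rather than as a string; the point is that roleArn and roleSessionName travel as options alongside streamName and initialPosition.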
5 More Replies
- 123 Views
- 5 replies
- 0 kudos
Hello, I am trying to connect the Power BI semantic model output (basically the data that has already been pre-processed) to Databricks. Does anybody know how to do this? I would like it to be an automated process, so I would like to know any way to p...
Latest Reply
Hi @madhumitha, Connecting Power BI semantic model output to Databricks can be done in a few steps.
Here are a couple of options:
Databricks Power Query Connector:
The new Databricks connector is natively integrated into Power BI. You can configu...
4 More Replies
- 34 Views
- 1 reply
- 0 kudos
We are a data consultancy. Our free trial period is coming to an end, and we are still doing a POC for one of our potential clients, focusing on providing expert services around Databricks. 1. Is there a possibility that we can extend the free t...
Latest Reply
Hey @ashraf1395,
I suggest you contact your Databricks representative or account manager.
- 14399 Views
- 3 replies
- 4 kudos
We have a Databricks job running with a main class and a JAR file. Our JAR code base is in Scala. When the job starts running, we need to log the Job ID and Run ID into a database for future reference. How can we achieve this?
Latest Reply
That article is for members only; could you also explain here how to do it (for those who are not Medium members)? Thanks!
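For JAR jobs, one common approach is to configure the task with parameter variables such as {{job.id}} and {{job.run_id}}, which the jobs service substitutes with real values at run time, then parse them in main() and write them to the database. A minimal sketch of the parsing step (in Python for brevity; the same idea applies in Scala), with hypothetical flag names:

```python
def extract_job_metadata(argv):
    """Parse --job_id/--run_id pairs passed as task parameters.

    Assumes the job task was configured with parameters such as
    ["--job_id", "{{job.id}}", "--run_id", "{{job.run_id}}"]; the
    substitution syntax and flag names here are illustrative assumptions.
    """
    meta = {}
    it = iter(argv)
    for token in it:
        if token in ("--job_id", "--run_id"):
            meta[token.lstrip("-")] = next(it)
    return meta

# With already-substituted values, as main() would see them:
print(extract_job_metadata(["--job_id", "123", "--run_id", "456"]))
# {'job_id': '123', 'run_id': '456'}
```

Once parsed, the two values can be inserted into the database with whatever JDBC or connector code the job already uses.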
2 More Replies
- 31 Views
- 0 replies
- 0 kudos
Input SQL script (assume any dialect):
SELECT b.se10,
       b.se3,
       b.se_aggrtr_indctr,
       b.key_swipe_ind
FROM
  (SELECT se10,
          se3,
          se_aggrtr_indctr,
          ROW_NUMBER() OVER (PARTITION BY SE10
          ...
by SreeG • New Contributor II
- 295 Views
- 3 replies
- 0 kudos
Hi, I am facing issues when deploying workflows to a different environment. The same works for notebooks and scripts, but deploying the workflows fails with "Authorization Failed. Your token may be expired or lack the valid scope". Anything shoul...
- 50 Views
- 1 reply
- 0 kudos
Hi, we have a DLT pipeline that has been running for a while with a Hive metastore target and has stored billions of records. We'd like to move the data to Unity Catalog. The documentation says "Existing pipelines that use the Hive metastore cannot...
Latest Reply
@MarkD good day!
I'm sorry, but according to the description, existing pipelines using the Hive metastore cannot be upgraded to use Unity Catalog. To migrate an existing pipeline that writes to Hive metastore, you must create a new pipeline and re-in...
- 2126 Views
- 4 replies
- 3 kudos
I am currently working with a VNet-injected Databricks workspace. At the moment I have mounted an ADLS Gen2 resource on the Databricks cluster. When running notebooks on a single node that read, transform, and write data, we do not encounter any probl...
Latest Reply
@TheDataDexter Did you find a solution to your problem? I am facing the same issue.
3 More Replies
by Ameshj • New Contributor II
- 345 Views
- 8 replies
- 0 kudos
I need help with migrating from DBFS on Databricks to workspace files. I am new to Databricks and am struggling with what is on the links provided. My workspace.yml also has DBFS hard-coded. Included is a full deployment with Great Expectations. This was don...
Latest Reply
One of the other suggestions is to use Lakehouse Federation. It may also be a driver issue (we will know from the logs).
7 More Replies
- 2051 Views
- 3 replies
- 0 kudos
Hello, I currently have a Delta folder as a table with several columns that are nullable. I want to migrate data to the table and overwrite the content using PySpark, add several new columns, and make them not nullable. I have found a way to make the c...
Latest Reply
Not sure if you found a solution; you can also try the below. In this case you pass the full path to the Delta location, not the table name itself:
spark.sql(f"ALTER TABLE delta.`{full_delta_path}` ALTER COLUMN {column_name} SET NOT NULL")
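A minimal sketch of that pattern, separated into a helper so the statement can be inspected before running; the path and column name below are placeholders, not values from the thread.

```python
def set_not_null_sql(full_delta_path, column_name):
    """Build the ALTER statement from the reply above.

    Backticks let the statement address the Delta location by path
    rather than by registered table name.
    """
    return (f"ALTER TABLE delta.`{full_delta_path}` "
            f"ALTER COLUMN {column_name} SET NOT NULL")

stmt = set_not_null_sql("/mnt/lake/my_table", "id")  # placeholder path/column
print(stmt)  # in a notebook, execute with spark.sql(stmt)
```

Note that Delta will reject SET NOT NULL if the column already contains nulls, so backfill those rows first.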
2 More Replies
- 81 Views
- 1 reply
- 1 kudos
Do Delta Live Tables pipelines support Oracle or external database connectivity? I am getting an Oracle driver not found error, and DLT does not support Maven installs through asset bundles. ERRORS: 1) py4j.protocol.Py4JJavaError: An error occurred while call...
Latest Reply
Hi @venkata_kishore,
As of now, DLT does not support Oracle, and one cannot install third-party libraries and JARs.
https://docs.databricks.com/en/delta-live-tables/unity-catalog.html#limitations If Lakehouse Federation has support for Oracle, then ...
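If Oracle federation is available in your workspace, the setup the reply points toward would look roughly like the statements below. Treat this as a sketch, not verified syntax: the connection and catalog names, host, port, credentials, and service name are all placeholders.

```python
# Placeholder Lakehouse Federation statements; adjust every value and
# confirm Oracle support in your workspace before running.
create_connection = (
    "CREATE CONNECTION IF NOT EXISTS oracle_conn TYPE oracle\n"
    "OPTIONS (host '<oracle-host>', port '1521',\n"
    "         user '<user>', password '<password>')"
)

create_catalog = (
    "CREATE FOREIGN CATALOG IF NOT EXISTS oracle_cat\n"
    "USING CONNECTION oracle_conn\n"
    "OPTIONS (service_name '<service-name>')"
)

for stmt in (create_connection, create_catalog):
    print(stmt + "\n")  # in a notebook, run each with spark.sql(stmt)
```

Once the foreign catalog exists, DLT code can read the Oracle tables through it instead of loading a JDBC driver into the pipeline.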
by Phani1 • Valued Contributor
- 33 Views
- 0 replies
- 0 kudos
Hi Team, can you provide information about topology, and centralized and federated workspaces, in Databricks and how they are used? Regards, Janga
- 83 Views
- 0 replies
- 1 kudos
Hello, we are using Delta Live Tables to ingest data from multiple business groups, each with different input file formats and parsing requirements. The input files are ingested from Azure Blob Storage. Right now, we are only servicing three busines...