- 4958 Views
- 3 replies
- 0 kudos
Problem: I'm unable to authenticate against the https://accounts.cloud.databricks.com endpoint even though I'm an account admin. I need it to assign account-level groups to workspaces via the workspace assignment API (https://api-docs.databricks.com/re...
Latest Reply
Hi @lasse l, Thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers your ...
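As a reference for the original question, here is a minimal sketch of assigning an account-level group to a workspace via the workspace assignment API, assuming a service principal with account-admin rights; all IDs and credentials below are placeholders:

import requests

ACCOUNT_ID = "<account-id>"
WORKSPACE_ID = "<workspace-id>"
GROUP_ID = "<account-group-id>"

# 1) Get an OAuth token for the account endpoint (client-credentials flow).
token = requests.post(
    f"https://accounts.cloud.databricks.com/oidc/accounts/{ACCOUNT_ID}/v1/token",
    auth=("<client-id>", "<client-secret>"),
    data={"grant_type": "client_credentials", "scope": "all-apis"},
).json()["access_token"]

# 2) Assign the account-level group to the workspace.
resp = requests.put(
    f"https://accounts.cloud.databricks.com/api/2.0/accounts/{ACCOUNT_ID}"
    f"/workspaces/{WORKSPACE_ID}/permissionassignments/principals/{GROUP_ID}",
    headers={"Authorization": f"Bearer {token}"},
    json={"permissions": ["USER"]},
)
resp.raise_for_status()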
2 More Replies
- 1805 Views
- 1 reply
- 2 kudos
If I try to create a Volume, I get this error: Failed to access cloud storage: AbfsRestOperationException exceptionTraceId=fa207c57-db1a-406e-926f-4a7ff0e4afdd. When I try to create a table, I get this error: Error creating table [RequestId=4b8fedcf-24b3-...
Latest Reply
Hi @meystingray,
• Databricks cannot access Azure storage, causing errors when creating a volume or table.
• The storage container has Storage Blob Contributor access, and the Storage Account has access, but there may be setup issues.
• Troubleshooting steps...
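For context, a sketch of the Unity Catalog wiring that has to be in place before volumes or tables can be created on an ABFS path; all names here are hypothetical:

# Assumes an Azure Access Connector with Storage Blob Data Contributor on the account.
spark.sql("""
    CREATE EXTERNAL LOCATION IF NOT EXISTS my_ext_loc
    URL 'abfss://mycontainer@mystorageacct.dfs.core.windows.net/data'
    WITH (STORAGE CREDENTIAL my_storage_cred)
""")

# If this listing fails, CREATE VOLUME / CREATE TABLE will fail the same way.
display(spark.sql("LIST 'abfss://mycontainer@mystorageacct.dfs.core.windows.net/data'"))

spark.sql("""
    CREATE EXTERNAL VOLUME IF NOT EXISTS main.default.my_volume
    LOCATION 'abfss://mycontainer@mystorageacct.dfs.core.windows.net/data/volumes'
""")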
- 2919 Views
- 3 replies
- 0 kudos
I am trying to follow along with the Apache Spark Programming training module, where the instructor creates an events table from a parquet file like this:
%sql
CREATE TABLE IF NOT EXISTS events USING parquet OPTIONS (path "/mnt/training/ecommerce/events/events.par...
Latest Reply
@Kaniz Thanks for your response. I didn't provide a cloud file system scheme in the path while creating the table using the DataFrame API, but I was still able to create the table.
%python
# File location and type
file_location = "/mnt/training/ecommerce/...
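For comparison, a sketch of the full DataFrame-API route described above (path and table name taken from the training module; options assumed):

# "/mnt/..." resolves to dbfs:/mnt/... on Databricks, so no explicit scheme is needed.
events_df = spark.read.format("parquet").load("/mnt/training/ecommerce/events/events.parquet")

# Persist as a table, mirroring the %sql CREATE TABLE statement above.
events_df.write.mode("overwrite").saveAsTable("events")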
2 More Replies
- 1044 Views
- 3 replies
- 0 kudos
When I try to use a third-party JAR on an Azure shared cluster (installed via Maven, and which I can successfully import), I get the following message: py4j.security.Py4JSecurityException: Method public static org.apache.spark.sql.Column com.da...
Latest Reply
Thanks Kaniz. I must use a shared cluster because I'm reading from a DLT table stored in a Unity Catalog (https://docs.databricks.com/en/data-governance/unity-catalog/compute.html). My understanding is that shared clusters are enforcing the Py4J policy I ...
2 More Replies
by alemo • New Contributor III
- 974 Views
- 3 replies
- 1 kudos
I'm trying to build a DLT pipeline in UC with Kinesis as the producer. My first table looks like:
@dlt.create_table(
    table_properties={"pipelines.autoOptimize.managed": "true"},
    spark_conf={"spark.databricks.delta.schema.autoMerge.enabled": "true"},
)
def feed_chu...
Latest Reply
If you use the "Preview" channel in the "Advanced" section of the DLT pipeline settings, this error should resolve itself. This fix is planned to make it into the "Current" channel by Aug 31, 2023.
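For reference, a minimal sketch of a DLT source table reading from Kinesis, close to the snippet in the question; the stream name and region are assumptions:

import dlt

@dlt.table(
    table_properties={"pipelines.autoOptimize.managed": "true"},
    spark_conf={"spark.databricks.delta.schema.autoMerge.enabled": "true"},
)
def feed_raw():
    # Hypothetical stream; options follow the Databricks Kinesis source.
    return (spark.readStream
            .format("kinesis")
            .option("streamName", "my-stream")
            .option("region", "eu-west-1")
            .option("initialPosition", "latest")
            .load())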
2 More Replies
by vroste • New Contributor III
- 1012 Views
- 1 reply
- 1 kudos
I have a DLT pipeline that runs every day, and an automatically executed maintenance job that runs within 24 hours every day. The maintenance operations are costly; is it possible to change the schedule to once a week or so?
Latest Reply
Hi @vroste, Based on the information provided, it is not possible to directly change the frequency of the automatic maintenance tasks performed by Delta Live Tables (DLT) from every 24 hours to once a week. The system is designed to perform maintenance...
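A workaround sometimes used instead (a sketch, not a supported way to reschedule DLT maintenance itself): opt the table out of managed auto-optimization via its table properties and run the costly commands from your own weekly job. Table names here are hypothetical:

import dlt

@dlt.table(
    # Opt this table out of DLT-managed auto-optimization.
    table_properties={"pipelines.autoOptimize.managed": "false"},
)
def my_dlt_table():
    return spark.read.table("my_catalog.my_schema.my_source")

# Then, in a separate notebook scheduled as a weekly job:
# spark.sql("OPTIMIZE my_catalog.my_schema.my_dlt_table")
# spark.sql("VACUUM my_catalog.my_schema.my_dlt_table")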
- 1836 Views
- 3 replies
- 3 kudos
In my DLT pipeline outlined below, which generically cleans identifier tables, after successfully creating the initial streaming tables from the append-only sources, it fails when trying to create the second set of cleaned tables with the following: It'**bleep** cl...
Latest Reply
Hi @scvbelle, The error you're seeing is an IllegalArgumentException caused by a restriction in Azure Blob File System (ABFS) that does not allow file or directory names ending with a dot. This error is thrown by the trailingPeriod...
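Since the failure comes from names ending in a dot, one defensive option is to sanitize identifiers before they reach ABFS; a purely illustrative sketch with a hypothetical source table:

def strip_trailing_dots(name: str) -> str:
    # ABFS rejects file/directory names that end with '.', so trim them.
    return name.rstrip(".")

df = spark.read.table("my_catalog.my_schema.raw_identifiers")  # hypothetical
for col_name in df.columns:
    if col_name.endswith("."):
        df = df.withColumnRenamed(col_name, strip_trailing_dots(col_name))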
2 More Replies
by kinsun • New Contributor II
- 7290 Views
- 5 replies
- 0 kudos
Dear Databricks Expert, I have some doubts when dealing with DBFS and the local file system. Case 01: Copy a file from ADLS to DBFS. I am able to do so with the Python code below:
#spark.conf.set("fs.azure.account.auth.type", "OAuth")
spark.conf.set("fs.a...
Latest Reply
Hi @KS LAU, Thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers your q...
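For readers landing here, a sketch of the ADLS-to-DBFS copy the question starts from, with placeholder service-principal values; note that dbutils.fs understands dbfs:/ and abfss:// URIs, while plain Python file APIs only see the driver's local disk:

acct = "mystorageacct"  # hypothetical storage account
spark.conf.set(f"fs.azure.account.auth.type.{acct}.dfs.core.windows.net", "OAuth")
spark.conf.set(f"fs.azure.account.oauth.provider.type.{acct}.dfs.core.windows.net",
               "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
spark.conf.set(f"fs.azure.account.oauth2.client.id.{acct}.dfs.core.windows.net", "<client-id>")
spark.conf.set(f"fs.azure.account.oauth2.client.secret.{acct}.dfs.core.windows.net", "<client-secret>")
spark.conf.set(f"fs.azure.account.oauth2.client.endpoint.{acct}.dfs.core.windows.net",
               "https://login.microsoftonline.com/<tenant-id>/oauth2/token")

dbutils.fs.cp(f"abfss://mycontainer@{acct}.dfs.core.windows.net/path/file.csv",
              "dbfs:/tmp/file.csv")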
4 More Replies
- 2858 Views
- 7 replies
- 4 kudos
How do I change my Databricks Community user name?
Latest Reply
Hi Sujitha, Thanks for the quick help! I cleaned the caches, and it works now.
6 More Replies
- 4411 Views
- 9 replies
- 5 kudos
I have a few Databricks clusters: some share a single Hive Metastore (HMS), call them PROD_CLUSTERS, and an additional cluster, ADHOC_CLUSTER, has its own HMS. All my data is stored in S3 as Databricks delta tables: PROD_CLUSTERS have read-wri...
Latest Reply
Hi @Nino, To query HMS to get the full path for all data files of tables defined in that HMS, you can use the Hive MetaStore API. Specifically, you can use the GET_TABLE_FILES operation to retrieve the file metadata for a given table, including the ...
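An alternative that avoids the metastore API, for Delta tables: iterate the catalog from Spark and read each table's root path via DESCRIBE DETAIL. A sketch with a hypothetical database name:

paths = {}
for table in spark.catalog.listTables("my_database"):
    detail = spark.sql(f"DESCRIBE DETAIL my_database.{table.name}").collect()[0]
    paths[table.name] = detail["location"]  # e.g. s3://bucket/warehouse/table
print(paths)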
8 More Replies
by Soma • Valued Contributor
- 2438 Views
- 10 replies
- 2 kudos
We use the PySpark streaming listener, and it is lagging by 10 hrs: data streamed at 10 AM IST is logged at 10 PM IST. Can someone explain how the logging listener interface works?
Latest Reply
When you're experiencing lag in Spark Streaming, it means that the system is not processing data in real-time, and there is a delay in data processing. This delay can be caused by various factors, and diagnosing and addressing the issue requires care...
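For orientation, the listener is just a set of callbacks the streaming engine fires on the driver; if the handlers do slow work, or the driver is saturated, the log timestamps fall behind the event timestamps. A minimal PySpark sketch (the Python listener API is available from Spark 3.4):

from pyspark.sql.streaming import StreamingQueryListener

class LoggingListener(StreamingQueryListener):
    def onQueryStarted(self, event):
        print(f"query started: {event.id}")

    def onQueryProgress(self, event):
        # Keep this cheap: it runs on the driver for every micro-batch.
        p = event.progress
        print(f"batch {p.batchId}: {p.numInputRows} rows at {p.timestamp}")

    def onQueryTerminated(self, event):
        print(f"query terminated: {event.id}")

spark.streams.addListener(LoggingListener())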
9 More Replies
- 9633 Views
- 15 replies
- 7 kudos
Hi, let's assume I have these things:
- Binary column containing protobuf-serialized data
- The .proto file including the message definition
What different approaches have Databricks users chosen to deserialize the data? Python is the programming language that...
Latest Reply
We've now added a native connector that parses directly with Spark DataFrames: https://docs.databricks.com/en/structured-streaming/protocol-buffers.html
from pyspark.sql.protobuf.functions import to_protobuf, from_protobuf
schema_registry_options = ...
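A short usage sketch with a compiled descriptor file; the message name and paths are assumptions:

from pyspark.sql.protobuf.functions import from_protobuf

# df has a binary column "value"; event.desc comes from:
#   protoc --descriptor_set_out=event.desc --include_imports event.proto
parsed = df.select(
    from_protobuf("value", "MyEvent", descFilePath="/dbfs/schemas/event.desc").alias("event")
)
parsed.select("event.*").display()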
14 More Replies
- 11952 Views
- 5 replies
- 9 kudos
I have a main Databricks notebook that runs a handful of functions. In this notebook, I import a helper.py file that is in the same repo, and when I execute the import, everything looks fine. Inside my helper.py there's a function that leverages built-i...
Latest Reply
Hi, I'm facing a similar issue when deploying via dbx. I have a helper notebook that works fine when executed via jobs (without any includes), while when I deploy it via dbx (to the same cluster), the helper notebook fails on dbutils.fs.ls(path) with NameEr...
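On the NameError itself: dbutils is injected into notebooks but not into plain Python modules or dbx-deployed files, so a common pattern is to construct it explicitly. A sketch:

# helper.py -- dbutils is not auto-defined outside a notebook, so build it.
from pyspark.sql import SparkSession

def get_dbutils(spark: SparkSession):
    try:
        from pyspark.dbutils import DBUtils  # available on Databricks clusters
        return DBUtils(spark)
    except ImportError:
        import IPython  # fallback when running inside a notebook context
        return IPython.get_ipython().user_ns["dbutils"]

spark = SparkSession.builder.getOrCreate()
dbutils = get_dbutils(spark)
files = dbutils.fs.ls("dbfs:/tmp")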
4 More Replies
- 1647 Views
- 3 replies
- 1 kudos
Hi, I have a Databricks cluster that was earlier connected to the Hive metastore, and we have started migrating to the Glue catalog. I'm facing an issue while creating a table: Path must be absolute: <table-name>-__PLACEHOLDER__. We have provided full access to Glue and S3 in...
Latest Reply
Hi @RC, The error message you're seeing suggests that the table path is not absolute. This could be due to how you create the table in the Glue Catalog. As per the given sources, when using AWS Glue Data Catalog as the metastore, it's recommended to...
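Following that suggestion, a sketch of creating the table with an explicit absolute LOCATION; the bucket and names are hypothetical:

# With Glue as the metastore, pass an absolute S3 path rather than relying
# on a relative or default table location.
spark.sql("""
    CREATE TABLE IF NOT EXISTS my_db.my_table (id INT, name STRING)
    USING delta
    LOCATION 's3://my-bucket/warehouse/my_db/my_table'
""")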
2 More Replies
- 4156 Views
- 2 replies
- 0 kudos
I am building a machine learning model using an sklearn Pipeline, which includes a ColumnTransformer as a preprocessor before the actual model. Below is the code for how the pipeline is created:
transformers = []
num_pipe = Pipeline(steps=[
('imputer', Si...
Latest Reply
Hi @Nasreddin, MLflow is compatible with an sklearn Pipeline with multiple steps. The error you're encountering, "This ColumnTransformer instance is not fitted yet. Call 'fit' with appropriate arguments before using this estimator.", is likely because C...
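In line with that answer, a sketch that fits the full pipeline before logging it; the feature names and toy data are placeholders:

import mlflow
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

num_pipe = Pipeline(steps=[("imputer", SimpleImputer()), ("scaler", StandardScaler())])
preprocessor = ColumnTransformer(transformers=[("num", num_pipe, ["age", "income"])])
model = Pipeline(steps=[("preprocessor", preprocessor), ("clf", LogisticRegression())])

X = pd.DataFrame({"age": [25, 40, 31], "income": [40000.0, 90000.0, 60000.0]})
y = [0, 1, 0]

model.fit(X, y)  # fit BEFORE logging so the ColumnTransformer is fitted
with mlflow.start_run():
    mlflow.sklearn.log_model(model, "model")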
1 More Reply