03-05-2026 10:31 AM
The Code:
from pyspark import pipelines as dp
from pyspark.sql.functions import current_date

@dp.table()
def ingest():
    # Read the source table and stamp each row with the processing date
    df = spark.read.table('stream.stream_learning.states_stream')
    df = df.withColumn('processado', current_date())
    return df
------------------------------------------------------------------------------------------------------------------------------
The error:
Category: Error
Message: Encountered an error with Unity Catalog while setting up the pipeline on cluster 0225-015320-jpn6b927-v2n.
Ensure that your Unity Catalog configuration is correct, and that required resources (e.g., catalog, schema) exist and are accessible.
Also verify that the cluster has appropriate permissions to access Unity Catalog.
Details: PERMISSION_DENIED: Can not move tables across arclight catalogs
Error class: UNITY_CATALOG_INITIALIZATION_FAILED
SQL state: 56000
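In my understanding, this UNITY_CATALOG_INITIALIZATION_FAILED / "Can not move tables across ... catalogs" error typically appears when a table the pipeline defines (here, ingest) already exists under a different catalog than the pipeline's current target catalog, so Unity Catalog refuses what it sees as a cross-catalog move. A minimal sketch of one way to make the target explicit is below; note this is an assumption, the name argument mirrors the dlt.table decorator API, and "my_catalog" / "my_schema" are placeholders for the catalog and schema actually configured as the pipeline's target:

```python
from pyspark import pipelines as dp
from pyspark.sql.functions import current_date

# Fully qualify the output table so it lands in the intended catalog/schema.
# "my_catalog" and "my_schema" are placeholders, not real names from the post.
@dp.table(name="my_catalog.my_schema.ingest")
def ingest():
    df = spark.read.table('stream.stream_learning.states_stream')
    return df.withColumn('processado', current_date())
```

If an earlier run created ingest under a different target catalog, dropping that old table (or keeping the pipeline's target catalog/schema unchanged between runs) may also clear the error.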
Data Analyst | Python, PySpark & AWS | MBA in Data Science (USP/Esalq) | Databricks & Data Infrastructure