Hi Community,
I'm new to Databricks and am trying to build and implement pipeline expectations. The pipelines run without errors and my job works. I've tried multiple ways to implement expectations, in both SQL and Python; I keep resolving individual errors but always end up with the same failure. I'm working with the free trial version of Databricks. Is there a limitation on building expectations in the trial version? Are there table permissions in Databricks that I'm not taking into account? The orders_2 table is a streaming table; are there limitations on applying expectations to streaming tables? My Python code:
%python
from pyspark import pipelines as dp
from pyspark.sql.functions import col

@dp.table(
    name="xyntrel_bronze.bronze.orders_2",
    comment="Orders table with data quality constraints"
)
@dp.expect_or_fail("row count > 100", "COUNT(*) > 100")
@dp.expect_or_fail("customer_id not null", "customer_id IS NOT NULL")
def bronze_table():
    return (
        spark.readStream.table("xyntrel_bronze.bronze.orders_2")
        .filter(col("order_id").isNotNull())
    )
The complete error in JSON:
"timestamp": "2025-12-10T18:23:25.679Z",
"message": "Update 19907c is FAILED.",
"level": "ERROR",
"error": {
"exceptions": [
{
"message": "",
"error_class": "_UNCLASSIFIED_PYTHON_COMMAND_ERROR",
"short_message": ""
}
],
"fatal": true
},
"details": {
"update_progress": {
"state": "FAILED"
}
},
"event_type": "update_progress",
"maturity_level": "STABLE"
}
Thanks guys!