Hi @Dnirmania, you could achieve something similar using this UDF:

```sql
%sql
CREATE OR REPLACE FUNCTION ryanlakehouse.default.column_masking(column_value STRING, groups_str STRING)
RETURNS STRING
LANGUAGE SQL
COMMENT 'Return the column value if use...
```
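For reference, here is a minimal sketch of how the complete function might look, assuming the intent is to return the raw value when the current user belongs to any of the comma-separated groups in groups_str, and a masked value otherwise; the comment text, mask string, and group handling are illustrative, not the original answer's exact body:

```sql
CREATE OR REPLACE FUNCTION ryanlakehouse.default.column_masking(column_value STRING, groups_str STRING)
RETURNS STRING
LANGUAGE SQL
COMMENT 'Return the column value if the user is in one of the allowed groups, otherwise mask it'
RETURN CASE
  -- exists() checks whether any entry in the comma-separated list matches;
  -- is_account_group_member() tests the current user's group membership
  WHEN exists(split(groups_str, ','), g -> is_account_group_member(trim(g)))
    THEN column_value
  ELSE '*****'
END;
```

You could then call it in a query or view, e.g. `SELECT ryanlakehouse.default.column_masking(email, 'hr_admins,compliance') FROM customers` (column, table, and group names illustrative).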
Hi @Amodak91, you could use the %run magic command from within the downstream notebook to call the upstream notebook. This runs it in the same context, making all of its variables accessible, including the dataframe, without needing to persist it.
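A minimal sketch of the pattern, assuming an upstream notebook at /Shared/upstream_notebook that defines a dataframe named df_sales (both names illustrative):

```python
# --- Upstream notebook: /Shared/upstream_notebook ---
df_sales = spark.read.table("samples.nyctaxi.trips")  # illustrative source table

# --- Downstream notebook ---
# %run must sit alone in its own cell:
# %run /Shared/upstream_notebook

# df_sales is now defined in this notebook's session, no persistence needed:
display(df_sales.limit(10))
```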
Hi @Mangeysh, you could achieve this using the Databricks SQL Statement Execution API. I would recommend going through the docs to review its functionality and limitations, and checking whether it serves your need, before planning to develop your own APIs.
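A minimal sketch of calling the Statement Execution API from Python; the workspace host, token environment variables, warehouse ID, and query are all illustrative:

```python
import os
import requests

host = os.environ["DATABRICKS_HOST"]    # e.g. https://<workspace>.cloud.databricks.com
token = os.environ["DATABRICKS_TOKEN"]  # a personal access token

resp = requests.post(
    f"{host}/api/2.0/sql/statements",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "warehouse_id": "1234567890abcdef",  # illustrative SQL warehouse ID
        "statement": "SELECT * FROM samples.nyctaxi.trips LIMIT 10",
        "wait_timeout": "30s",               # wait synchronously up to 30 seconds
    },
)
resp.raise_for_status()
result = resp.json()
print(result["status"]["state"])  # e.g. SUCCEEDED
# With the default inline JSON disposition, rows are under result["result"]["data_array"]
```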
Hi @ChristianRRL, you could get this information using the dynamic value reference {{job.trigger.type}}. In your task settings, assign it to a parameter, and then you can access it from within your notebook using dbutils widgets.
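For example, with a task parameter named trigger_type (the parameter name is illustrative) whose value is set to {{job.trigger.type}}, the notebook can read it like this:

```python
# Default value keeps the notebook runnable interactively, outside a job
dbutils.widgets.text("trigger_type", "")
trigger_type = dbutils.widgets.get("trigger_type")
print(f"This run was triggered by: {trigger_type}")
```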
Hi @ChristianRRL, the results of a SQL cell are automatically made available as a Python dataframe through the _sqldf variable. You can read more about it here. As for the second part, I'm not sure why you would need it when you can simply run the query like: spark.s...
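A short sketch of both approaches; the table name is illustrative:

```python
# A SQL cell such as:
#   %sql
#   SELECT * FROM samples.nyctaxi.trips LIMIT 10
# exposes its result to the next Python cell as _sqldf:
distances = _sqldf.select("trip_distance")

# Equivalently, run the same query directly from Python:
distances = spark.sql("SELECT trip_distance FROM samples.nyctaxi.trips LIMIT 10")
display(distances)
```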