Hi,
Is it possible to have a Spark session (one that can be used to query Unity Catalog etc.) available within a Model Serving endpoint?
I have an MLflow pyfunc model that needs to fetch data from a Feature Table as part of its `.predict()` method. See my earlier question for more context and for why I can't use Feature Lookups via the Feature Engineering client.
My solution following that was to instead just query the historical data I need within the `.predict()` method via `spark.read.table()`.
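For illustration, here's roughly the shape of what I'm doing; a minimal sketch, where the class name, table name, and join key (`HistoryAwareModel`, `main.my_schema.feature_history`, `entity_id`) are placeholders standing in for my actual code:

```python
import mlflow.pyfunc
from pyspark.sql import SparkSession


class HistoryAwareModel(mlflow.pyfunc.PythonModel):
    """Pyfunc wrapper that pulls historical features at inference time."""

    def predict(self, context, model_input):
        # Works in a notebook, where an active session already exists;
        # this is the part that has no session inside Model Serving.
        spark = SparkSession.getActiveSession()

        # Placeholder table name -- stands in for my real feature table.
        history = spark.read.table("main.my_schema.feature_history").toPandas()

        # Join the request rows to their historical features on a key column.
        return model_input.merge(history, on="entity_id", how="left")
```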
This works fine within a notebook environment, which already has a Spark session created (with access to Unity Catalog).
However, when I deploy a Model Serving endpoint for the model and try to use it for inference, I ultimately get the following exception: `Exception: No SparkSession Available!`. Presumably this is because the serving environment does not have a Spark session created.
(I get the same error if I validate the model with MLflow's `mlflow.models.predict()` function.)
I suppose one alternative would be to create a Feature Serving endpoint for the Unity Catalog table I need, then query that from within my model's `.predict()` method. Is there a convenient way of handling this, or will I simply need to send a POST request to the appropriate URL? A sketch of the raw-HTTP fallback I have in mind is below.
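Something like this, assuming the standard serving-endpoint invocations route; the endpoint name (`my-feature-endpoint`), the key column (`entity_id`), and the use of a PAT from environment variables are all assumptions on my part:

```python
import os
import requests


def lookup_features(entity_ids):
    """Query a Feature Serving endpoint for the given keys.

    DATABRICKS_HOST / DATABRICKS_TOKEN are assumed to be set in the
    serving environment; the endpoint name is a placeholder.
    """
    host = os.environ["DATABRICKS_HOST"]    # e.g. https://<workspace>.cloud.databricks.com
    token = os.environ["DATABRICKS_TOKEN"]

    response = requests.post(
        f"{host}/serving-endpoints/my-feature-endpoint/invocations",
        headers={"Authorization": f"Bearer {token}"},
        json={"dataframe_records": [{"entity_id": eid} for eid in entity_ids]},
    )
    response.raise_for_status()
    return response.json()
```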