I'm facing challenges using Databricks to serve, log, and monitor OpenAI usage in an Azure environment. Specifically, when I create a serving endpoint backed by an external provider (OpenAI), the Inference Tables option is not enabled in the UI. As a workaround, I wrapped the OpenAI calls in a custom PyFunc class and used API calls for model registration and serving endpoint creation. I'm looking for insights or a cleaner approach to streamline this process and enable efficient logging and monitoring of OpenAI usage. Any guidance or expertise on this would be greatly appreciated.
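
For reference, here is a minimal sketch of the kind of workaround I'm describing: a PyFunc wrapper around Azure OpenAI, registered via MLflow, and a serving endpoint created through the REST API with `auto_capture_config` to turn on Inference Tables (the part that is greyed out for me in the UI). The model names, Unity Catalog location, endpoint name, and environment variables below are placeholders for illustration, not my exact setup.

```python
# Sketch of the workaround: wrap Azure OpenAI calls in a custom PyFunc, register
# the model, then create the serving endpoint via the REST API with
# auto_capture_config so inference tables are enabled. Names, env vars, and the
# catalog/schema are placeholders.
import os

import mlflow
import requests


class OpenAIProxy(mlflow.pyfunc.PythonModel):
    """PyFunc wrapper that forwards prompts to an Azure OpenAI deployment."""

    def predict(self, context, model_input):
        # Imported lazily so the dependency resolves inside the serving container.
        from openai import AzureOpenAI

        client = AzureOpenAI(
            azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # placeholder env vars
            api_key=os.environ["AZURE_OPENAI_API_KEY"],
            api_version="2024-02-01",
        )
        responses = []
        for prompt in model_input["prompt"]:
            completion = client.chat.completions.create(
                model="gpt-4o",  # Azure deployment name -- placeholder
                messages=[{"role": "user", "content": prompt}],
            )
            responses.append(completion.choices[0].message.content)
        return responses


# Log and register the wrapper in Unity Catalog (placeholder three-level name).
mlflow.set_registry_uri("databricks-uc")
with mlflow.start_run():
    mlflow.pyfunc.log_model(
        artifact_path="openai_proxy",
        python_model=OpenAIProxy(),
        registered_model_name="main.default.openai_proxy",
        pip_requirements=["openai>=1.0", "mlflow"],
    )

# Create the serving endpoint through the REST API, enabling inference tables
# via auto_capture_config.
host = os.environ["DATABRICKS_HOST"]
token = os.environ["DATABRICKS_TOKEN"]
payload = {
    "name": "openai-proxy-endpoint",  # placeholder endpoint name
    "config": {
        "served_entities": [
            {
                "entity_name": "main.default.openai_proxy",
                "entity_version": "1",
                "workload_size": "Small",
                "scale_to_zero_enabled": True,
            }
        ],
        "auto_capture_config": {
            "catalog_name": "main",       # placeholder catalog
            "schema_name": "default",     # placeholder schema
            "table_name_prefix": "openai_usage",
            "enabled": True,
        },
    },
}
resp = requests.post(
    f"{host}/api/2.0/serving-endpoints",
    headers={"Authorization": f"Bearer {token}"},
    json=payload,
)
resp.raise_for_status()
```

This gets requests flowing and payloads captured into the inference table, but it feels heavier than it should be compared to just pointing an external-model endpoint at OpenAI, so I'd love to hear if there's a supported way to get Inference Tables on external provider endpoints directly.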