<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Model Serving Only Shows WARNING/ERROR Logs in Machine Learning</title>
    <link>https://community.databricks.com/t5/machine-learning/model-serving-only-shows-warning-error-logs/m-p/149286#M4555</link>
    <description>&lt;P&gt;Hi everyone,&lt;/P&gt;&lt;P&gt;I’m deploying a custom model using mlflow.pyfunc.PythonModel in Databricks Model Serving. Inside my wrapper code, I configured logging as follows:&lt;/P&gt;&lt;LI-CODE lang="python"&gt;logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format='%(asctime)s [%(levelname)s] %(name)s: %(message)s',
    force=True
)
logger = logging.getLogger()&lt;/LI-CODE&gt;&lt;P&gt;However, in the Model Serving service logs I can only see logger.warning() and&amp;nbsp;logger.error() messages.&lt;/P&gt;&lt;P&gt;I would like to understand:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;What is the default logging level for Model Serving endpoints?&lt;/LI&gt;&lt;LI&gt;Is there a supported way to enable INFO level logs?&lt;/LI&gt;&lt;LI&gt;&lt;SPAN&gt;If configurable, how can I ensure that all INFO logs (including those from modules used inside the wrapped mlflow.pyfunc.PythonModel) are visible?&lt;/SPAN&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;Any guidance or documentation reference would be greatly appreciated.&lt;/P&gt;&lt;P&gt;Thanks in advance!&lt;/P&gt;</description>
    <pubDate>Wed, 25 Feb 2026 12:14:56 GMT</pubDate>
    <dc:creator>fede_bia</dc:creator>
    <dc:date>2026-02-25T12:14:56Z</dc:date>
    <item>
      <title>Model Serving Only Shows WARNING/ERROR Logs</title>
      <link>https://community.databricks.com/t5/machine-learning/model-serving-only-shows-warning-error-logs/m-p/149286#M4555</link>
      <description>&lt;P&gt;Hi everyone,&lt;/P&gt;&lt;P&gt;I’m deploying a custom model using mlflow.pyfunc.PythonModel in Databricks Model Serving. Inside my wrapper code, I configured logging as follows:&lt;/P&gt;&lt;LI-CODE lang="python"&gt;logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format='%(asctime)s [%(levelname)s] %(name)s: %(message)s',
    force=True
)
logger = logging.getLogger()&lt;/LI-CODE&gt;&lt;P&gt;However, in the Model Serving service logs I can only see logger.warning() and&amp;nbsp;logger.error() messages.&lt;/P&gt;&lt;P&gt;I would like to understand:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;What is the default logging level for Model Serving endpoints?&lt;/LI&gt;&lt;LI&gt;Is there a supported way to enable INFO level logs?&lt;/LI&gt;&lt;LI&gt;&lt;SPAN&gt;If configurable, how can I ensure that all INFO logs (including those from modules used inside the wrapped mlflow.pyfunc.PythonModel) are visible?&lt;/SPAN&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;Any guidance or documentation reference would be greatly appreciated.&lt;/P&gt;&lt;P&gt;Thanks in advance!&lt;/P&gt;</description>
      <pubDate>Wed, 25 Feb 2026 12:14:56 GMT</pubDate>
      <guid>https://community.databricks.com/t5/machine-learning/model-serving-only-shows-warning-error-logs/m-p/149286#M4555</guid>
      <dc:creator>fede_bia</dc:creator>
      <dc:date>2026-02-25T12:14:56Z</dc:date>
    </item>
    <item>
      <title>Re: Model Serving Only Shows WARNING/ERROR Logs</title>
      <link>https://community.databricks.com/t5/machine-learning/model-serving-only-shows-warning-error-logs/m-p/150151#M4572</link>
      <description>&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/217647"&gt;@fede_bia&lt;/a&gt;&lt;/P&gt;
&lt;P&gt;This is worth walking through carefully, as it is a common source of confusion when deploying custom models on Databricks Model Serving.&lt;/P&gt;
&lt;P&gt;SHORT ANSWER&lt;/P&gt;
&lt;P&gt;The default root logging level for Model Serving endpoints is set to WARNING. That is why you only see logger.warning() and logger.error() messages in the Logs tab -- your INFO-level messages are being filtered out before they reach the log output.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;WHY THIS HAPPENS&lt;/P&gt;
&lt;P&gt;Databricks Model Serving containers set the root Python logger to WARNING level by default. Even though your code calls logging.basicConfig() with level=logging.INFO, the serving infrastructure's own logging configuration takes precedence, since the container environment configures logging before your model code runs. The official documentation confirms this: it recommends using logging.warning(...) or logging.error(...) "for immediate display in the logs."&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;HOW TO ENABLE INFO-LEVEL LOGS&lt;/P&gt;
&lt;P&gt;You can override the root logger level inside your model's load_context() method, which runs when the model is first loaded into the serving container. This is the supported approach:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import logging

import mlflow


class MyModel(mlflow.pyfunc.PythonModel):
    def load_context(self, context):
        # Raise the root logger level that the serving container set to WARNING
        root = logging.getLogger()
        root.setLevel(logging.DEBUG)
        for handler in root.handlers:
            handler.setLevel(logging.DEBUG)

        # Your other initialization code here
        self.logger = logging.getLogger(__name__)
        self.logger.info("Model loaded successfully -- this should now appear!")

    def predict(self, context, model_input, params=None):
        self.logger.info("Received inference request")
        # your prediction logic
        return result&lt;/LI-CODE&gt;
&lt;P&gt;The key points here are:&lt;/P&gt;
&lt;P&gt;1. Use load_context(), not module-level code or __init__, to configure logging. This method runs after the serving container has set up its environment, so your overrides take effect.&lt;/P&gt;
&lt;P&gt;2. Reset BOTH the root logger level AND each handler's level. The container may attach handlers that have their own level filters set to WARNING, so just changing the logger level alone may not be enough.&lt;/P&gt;
&lt;P&gt;3. Set the level to DEBUG or INFO depending on your needs. Setting it to DEBUG will give you the most verbose output, which is helpful during initial deployment and troubleshooting.&lt;/P&gt;
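&lt;P&gt;To see locally why point 2 matters, here is a minimal sketch in plain Python (no Databricks required; the simulated handler setup is just for illustration):&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import io
import logging

# Simulate a container that attached a WARNING-level handler to the root logger.
buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setLevel(logging.WARNING)
root = logging.getLogger()
root.handlers = [handler]
root.setLevel(logging.INFO)  # the logger itself now allows INFO...

root.info("info before fix")
assert "info before fix" not in buf.getvalue()  # ...but the handler filtered it

# Reset the handler levels too, as load_context() does above.
for h in root.handlers:
    h.setLevel(logging.INFO)

root.info("info after fix")
assert "info after fix" in buf.getvalue()&lt;/LI-CODE&gt;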
&lt;P&gt;&lt;BR /&gt;VIEWING THE LOGS&lt;/P&gt;
&lt;P&gt;Once you have reconfigured the logger as shown above, your INFO messages will appear in:&lt;/P&gt;
&lt;P&gt;- The "Logs" tab in the Serving UI (ephemeral service logs that capture stdout/stderr in real time)&lt;BR /&gt;- Via the REST API at GET /api/2.0/serving-endpoints/{name}/served-models/{served-model-name}/logs&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;PERSISTENT LOGGING WITH OPENTELEMETRY (OPTIONAL)&lt;/P&gt;
&lt;P&gt;If you need long-term log retention beyond what the ephemeral Logs tab provides, Databricks supports persisting logs to Unity Catalog Delta tables using OpenTelemetry. When enabled, your logs (including INFO and DEBUG if you set the level as described above) are written to a &amp;lt;prefix&amp;gt;_otel_logs table with columns like timestamp, severity_text, body, trace_id, and span_id. This is configured during endpoint creation through the Serving UI or REST API. This is especially useful for production debugging and compliance.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;A NOTE ON print() STATEMENTS&lt;/P&gt;
&lt;P&gt;Standard print() calls write to stdout and will also appear in the ephemeral service logs. However, using the Python logging module is recommended over print() because it gives you structured output with timestamps, log levels, and logger names, making it much easier to filter and debug.&lt;/P&gt;
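&lt;P&gt;The difference is easy to see locally; the logger name "my_model" and the message are made up for illustration:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import io
import logging

buf = io.StringIO()
handler = logging.StreamHandler(buf)
# The same kind of format string you passed to basicConfig()
handler.setFormatter(logging.Formatter("[%(levelname)s] %(name)s: %(message)s"))

logger = logging.getLogger("my_model")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.propagate = False  # keep the demo output out of the root logger

# print("scoring batch of 32") would emit only the bare text;
# logging adds the level and logger name automatically:
logger.info("scoring batch of 32")
print(buf.getvalue())  # [INFO] my_model: scoring batch of 32&lt;/LI-CODE&gt;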
&lt;P&gt;&lt;BR /&gt;DOCUMENTATION REFERENCES&lt;/P&gt;
&lt;P&gt;- Monitor and diagnose serving endpoints (covers ephemeral logs, build logs, and OpenTelemetry): &lt;A href="https://docs.databricks.com/en/machine-learning/model-serving/monitor-diagnose-endpoints.html" target="_blank"&gt;https://docs.databricks.com/en/machine-learning/model-serving/monitor-diagnose-endpoints.html&lt;/A&gt;&lt;BR /&gt;- Persist logs to Unity Catalog with OpenTelemetry: &lt;A href="https://docs.databricks.com/en/machine-learning/model-serving/custom-model-serving-uc-logs.html" target="_blank"&gt;https://docs.databricks.com/en/machine-learning/model-serving/custom-model-serving-uc-logs.html&lt;/A&gt;&lt;BR /&gt;- Debug model serving endpoints: &lt;A href="https://docs.databricks.com/en/machine-learning/model-serving/model-serving-debug.html" target="_blank"&gt;https://docs.databricks.com/en/machine-learning/model-serving/model-serving-debug.html&lt;/A&gt;&lt;BR /&gt;- Serving endpoints logs API reference: &lt;A href="https://docs.databricks.com/api/workspace/servingendpoints/logs" target="_blank"&gt;https://docs.databricks.com/api/workspace/servingendpoints/logs&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;Hope this helps! Let us know if you run into any issues after making the change.&lt;/P&gt;
&lt;P&gt;* This reply used an agent system I built to research and draft this response based on the wide set of documentation I have available and previous memory. I personally review the draft for any obvious issues and for monitoring system reliability and update it when I detect any drift, but there is still a small chance that something is inaccurate, especially if you are experimenting with brand new features.&lt;/P&gt;
      <pubDate>Sun, 08 Mar 2026 05:01:49 GMT</pubDate>
      <guid>https://community.databricks.com/t5/machine-learning/model-serving-only-shows-warning-error-logs/m-p/150151#M4572</guid>
      <dc:creator>SteveOstrowski</dc:creator>
      <dc:date>2026-03-08T05:01:49Z</dc:date>
    </item>
  </channel>
</rss>

