Hi,

For a PySpark model that also involves a pipeline, and that I want to register with MLflow, I am using a pyfunc wrapper. Steps I followed:

1. Pipeline and model serialization and logging (using a Volume locally; the logging will be performed in dbfs...
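For context, the wrapper shape I mean can be sketched as below. This is a hedged illustration, not my exact code: the class name and the "spark_pipeline" artifact key are hypothetical, and in real use the class subclasses mlflow.pyfunc.PythonModel (shown here as a plain class with the same interface so it reads standalone).

```python
# Hedged sketch of a pyfunc wrapper for a Spark ML PipelineModel.
# In real use this subclasses mlflow.pyfunc.PythonModel; imports are
# deferred into the methods so the sketch reads without pyspark installed.
class SparkPipelineWrapper:
    def load_context(self, context):
        # context.artifacts maps logged artifact names to local paths;
        # "spark_pipeline" is a hypothetical key chosen at logging time.
        from pyspark.ml import PipelineModel
        self.pipeline = PipelineModel.load(context.artifacts["spark_pipeline"])

    def predict(self, context, model_input):
        # pyfunc hands in a pandas DataFrame; convert to Spark, run the
        # pipeline, and hand predictions back as pandas.
        from pyspark.sql import SparkSession
        spark = SparkSession.builder.getOrCreate()
        sdf = spark.createDataFrame(model_input)
        return self.pipeline.transform(sdf).select("prediction").toPandas()
```

It would then be logged with something like `mlflow.pyfunc.log_model(..., python_model=SparkPipelineWrapper(), artifacts={"spark_pipeline": <pipeline_path>})`.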
The following assignment:

from langchain.sql_database import SQLDatabase
dbase = SQLDatabase.from_databricks(catalog=catalog, schema=db, host=host, api_token=token)

fails with ValueError: invalid literal for int() with base 10: '' because of cls._assert_p...
I am trying to find a way to locally download the model artifacts that build a chatbot chain registered with MLflow in Databricks, so that I can preserve the whole structure (chain -> model -> steps -> yaml & pkl files). There is a mention in a contri...
Hi,

I cannot see the query execution time in the response to the "api/2.0/sql/history/queries" request. Basically, I get only the following fields:

{"next_page_token": ..., "has_next_page": ..., "res": [ { "query_id": ..., "status": ..., "query_tex...
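For completeness, this is roughly how I build the request; the include_metrics flag, which as far as I can tell from the API docs should add a per-query metrics object with timing fields, is what I am experimenting with (parameter name assumed from the docs, hostname is illustrative):

```python
import urllib.parse

def query_history_url(host: str, include_metrics: bool = True,
                      max_results: int = 100) -> str:
    """Build the Query History API URL; with include_metrics=true the
    response should carry a metrics object with timing fields in
    addition to the basic query info shown above."""
    params = urllib.parse.urlencode({
        "include_metrics": str(include_metrics).lower(),
        "max_results": max_results,
    })
    return f"https://{host}/api/2.0/sql/history/queries?{params}"
```

The URL is then fetched with the usual Authorization: Bearer token header.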
Hi @Retired_mod,

You may be right regarding SQLAlchemy; the trace hints at something related:

> /lib/python3.10/site-packages/langchain_community/utilities/sql_database.py", line 133, in from_uri
> """Construct a SQLAlchemy engine from URI."""

But I don...
Hi,

I also ran into such an issue. I would find it very useful to be able to also see the errors issued in the prebuild stage. In any case, if it helps, I eventually found out through "trial and error" that the problem was caused by an incompatible ver...
OK, eventually I found a solution. I write it below in case somebody needs it. Basically, if the local directory passed to the download_artifacts method is an existing and accessible one in DBFS, the process works as expected.

import os
# Con...