Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Databricks Streamlit app for data ingestion into a table

manish_tanwar
New Contributor
I am using this code in a notebook to save a data row to a table, and it works perfectly. Now I am using the same function to save data from a chatbot in a Streamlit chatbot application on Databricks, and I am getting the following error:
ERROR ###################################

Failed to save to usdev_rnd.gme_luminai.feedback_tracker: [PARSE_SYNTAX_ERROR] Syntax error at or near '%'. SQLSTATE: 42601 (line 1, pos 149)

== SQL ==
INSERT INTO usdev_rnd.gme_luminai.feedback_tracker (Username, Email, UserID, Query, Response, ChatHistory, Timestamp, Feedback, Suggestions) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s)
-----------------------------------------------------------------------------------------------------------------------------------------------------^^^
ERROR#####################################

When I changed the %s placeholders to "?" in the app code, it still failed to save the data. Please help me correct this, or explain the reason for the error.



# Import necessary libraries
import logging
import os
from databricks import sql
from datetime import datetime

# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

# Databricks connection configuration (Consider using Databricks Secrets)
DATABRICKS_SERVER_HOSTNAME = os.getenv("DATABRICKS_SERVER_HOSTNAME")
DATABRICKS_HTTP_PATH = os.getenv("DATABRICKS_HTTP_PATH")
DATABRICKS_ACCESS_TOKEN = os.getenv("DATABRICKS_ACCESS_TOKEN")


## Function to save data to the database
def save_to_database(table, data):
    try:
        with sql.connect(server_hostname=DATABRICKS_SERVER_HOSTNAME,
                         http_path=DATABRICKS_HTTP_PATH,
                         access_token=DATABRICKS_ACCESS_TOKEN) as conn:
            with conn.cursor() as cur:
                columns = ", ".join(data.keys())
                placeholders = ", ".join(["%s"] * len(data))
                query = f"INSERT INTO {table} ({columns}) VALUES ({placeholders})"
                logger.info(f"SQL Query: {query}")
                logger.info(f"Data: {data}")
                data_values = list(data.values())
                cur.execute(query, tuple(data_values))
                conn.commit()
        logger.info(f"Data saved to {table} successfully.")
    except Exception as e:
        logger.error(f"Failed to save to {table}: {e}")
        print(f"Failed to save to {table}: {e}")

# Example usage
if __name__ == "__main__":
    feedback_data = {
        "Username": "user123",
        "Email": "user@example.com",
        "UserID": "abc-456",
        "Query": "What is the capital of France?",
        "Response": "The capital of France is Paris.",
        "ChatHistory": "[{'user': '...', 'bot': '...'}]",
        "Timestamp": datetime.now(),
        "Feedback": "1",
        "Suggestions": "Test1"
    }
    table_name = "catalog_name.schema_name.feedback_tracker"
    if DATABRICKS_SERVER_HOSTNAME and DATABRICKS_HTTP_PATH and DATABRICKS_ACCESS_TOKEN:
        save_to_database(table_name, feedback_data)
    else:
        logger.warning("Databricks connection parameters not found. Cannot save to database.")
        print("Databricks connection parameters not found. Cannot save to database.")
5 REPLIES

manish_tanwar
New Contributor

Thanks for stopping by; I have solved the issue.

jaideep
Databricks Employee

Hi @manish_tanwar, do you mind posting the code for others' reference? Thank you!

pradeepvatsvk
New Contributor III

Hi @manish_tanwar, how can we work with Streamlit apps in Databricks? I have a use case where I want to ingest data from different CSV files into Delta tables.

@pradeepvatsvk Well, it's quite simple if you:

1. Create tables whose column names match those of the CSV files,

2. Convert each CSV into dictionaries (one per row), and

3. Run the save function above against the corresponding tables.

That should work fine; a sketch follows below.
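A minimal sketch of those steps, assuming pandas is available in the app environment and using a hypothetical CSV path and mapping (save_to_database is the function from the original post):

import pandas as pd

# Hypothetical mapping from CSV files to the Delta tables created in step 1.
csv_to_table = {
    "feedback.csv": "catalog_name.schema_name.feedback_tracker",
}

for path, table in csv_to_table.items():
    df = pd.read_csv(path)
    # One dict per CSV row, keyed by the (matching) column names (step 2).
    for row in df.to_dict(orient="records"):
        save_to_database(table, row)  # step 3: reuse the insert function

Row-by-row INSERTs are fine for small files; for bulk ingestion into Delta tables, a set-based load such as COPY INTO is usually a better fit.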

Are you hosting/building your Streamlit application in Databricks Apps?
