
Local pyspark read data using jdbc driver returns column names only

amelia1
New Contributor II

Hello,

I have an Azure Databricks serverless SQL warehouse that I can connect to using databricks-sql-connector. But when I try to use PySpark with a JDBC driver URL, I can't read or write correctly.
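For reference, the databricks-sql-connector path that works looks roughly like this (a minimal sketch; get_connection_configs() is my own helper that returns the warehouse hostname, HTTP path, and access token):

from databricks import sql

server_hostname, http_path, access_token = get_connection_configs()

with sql.connect(server_hostname=server_hostname,
                 http_path=http_path,
                 access_token=access_token) as conn:
    with conn.cursor() as cursor:
        cursor.execute("SELECT * FROM researchers.dev.test_transcripts LIMIT 5")
        print(cursor.fetchall())  # returns real rows here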

Below is the PySpark code that fails:


from pyspark import SparkConf
from pyspark.sql import SparkSession


def get_jdbc_url():
    # Define your Databricks parameters
    server_hostname, http_path, access_token = get_connection_configs()
    default_catalog = "researchers"
    # Build the Spark JDBC URL
    jdbc_url = f"jdbc:databricks://{server_hostname}:443;httpPath={http_path};AuthMech=3;UID=token;PWD={access_token};ConnCatalog={default_catalog}"
    return jdbc_url


def get_spark() -> SparkSession:
    # Initialize SparkSession
    conf = SparkConf()
    # unfortunately, there is no automatic way for pyspark to download the jar itself
    # TODO: set to your own jar path
    conf.set("spark.jars",
             "/home/username/repos/uwcip-research/code-examples/DatabricksJDBC42-2.6.38.1068/DatabricksJDBC42.jar")
    conf.set("spark.driver.extraClassPath",
             "/home/username/repos/uwcip-research/code-examples/DatabricksJDBC42-2.6.38.1068/DatabricksJDBC42.jar")
    conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")

    spark = SparkSession.builder \
        .appName("connecting to databricks") \
        .config(conf=conf) \
        .getOrCreate()
    return spark


def read_table():
    spark = get_spark()
    jdbc_url = get_jdbc_url()
    dbtable = "researchers.dev.test_transcripts"

    df = spark.read \
        .format("jdbc") \
        .option("url", jdbc_url) \
        .option("dbtable", dbtable) \
        .load()

    top5 = df.head(5)
    print(top5)
    return

Below are the error and read results I got when calling read_table(). Notice that it simply returned the column names five times. When I call df.printSchema(), the schema prints fine.

ERROR StatusLogger Unrecognized conversion specifier [msg] starting at position 54 in conversion pattern.
ERROR StatusLogger Unrecognized format specifier [n]
ERROR StatusLogger Unrecognized conversion specifier [n] starting at position 56 in conversion pattern.
[Row(video_name='video_name', video_transcript='video_transcript'), Row(video_name='video_name', video_transcript='video_transcript'), Row(video_name='video_name', video_transcript='video_transcript'), Row(video_name='video_name', video_transcript='video_transcript'), Row(video_name='video_name', video_transcript='video_transcript')]

Additionally, there is a probably unrelated logger error:

ERROR StatusLogger Unable to create Lookup for bundle
java.lang.ClassCastException: class org.apache.logging.log4j.core.lookup.ResourceBundleLookup
at java.base/java.lang.Class.asSubclass(Class.java:3640)
at com.databricks.client.jdbc42.internal.apache.logging.log4j.core.lookup.Interpolator.<init>(Interpolator.java:84)
at com.databricks.client.jdbc42.internal.apache.logging.log4j.core.lookup.Interpolator.<init>(Interpolator.java:105)
at com.databricks.client.jdbc42.internal.apache.logging.log4j.core.config.AbstractConfiguration.<init>(AbstractConfiguration.java:135)
at com.databricks.client.jdbc42.internal.apache.logging.log4j.core.config.NullConfiguration.<init>(NullConfiguration.java:32)
at com.databricks.client.jdbc42.internal.apache.logging.log4j.core.LoggerContext.<clinit>(LoggerContext.java:74)

...

ERROR StatusLogger Unable to create Lookup for ctx
java.lang.ClassCastException: class org.apache.logging.log4j.core.lookup.ContextMapLookup
at java.base/java.lang.Class.asSubclass(Class.java:3640)
at com.databricks.client.jdbc42.internal.apache.logging.log4j.core.lookup.Interpolator.<init>(Interpolator.java:84)

 

1 REPLY

Kaniz_Fatma
Community Manager

Hi @amelia1, the error messages you provided indicate that there might be a problem with the log4j configuration and formatting. Additionally, the repeated column names suggest that there might be an issue with how the data is being retrieved.

Here are some steps to help you resolve this issue:

  1. Log4j Issue:

    • The log4j errors you’re seeing might not be directly related to the repeated column names, but it’s worth addressing them first.
    • Ensure that your log4j configuration is set up correctly. Make sure you have the appropriate log4j dependencies in your project.
    • If you’re using custom log4j configurations, double-check them for any misconfigurations.
  2. Column Names Issue:

    • The repeated column names could be due to incorrect column mappings or aliases.
    • Verify that the column names in your SQL query match the actual column names in your table. Any discrepancies can cause this behavior.
    • If your column names have spaces or special characters, enclose them with backticks (`) in your SQL query to avoid issues.
    • For example (see also the query sketch after this list):
      sqlQuery = "SELECT `column1`, `column2`, `column3` FROM tableschema.tablename WHERE start_date >= '2023-11-11' AND end_date < '2023-11-15' LIMIT 3"
      
  3. Check Your JDBC URL:

    • Ensure that your JDBC URL is correctly formed and points to the right Databricks instance.
    • Double-check the dbtable parameter in your read_table() function. Make sure it corresponds to the correct table name in your Azure SQL warehouse.
  4. Spark Configuration:

    • Verify that your Spark configuration is set up correctly. Specifically, check the paths to the Databricks JDBC driver JAR files.
    • Make sure you’re using the latest version of the Databricks JDBC driver compatible with your Spark version; a dependency sketch follows this list.
  5. Test with a Simple Query:

    • To isolate the issue, try reading a simple query (e.g., SELECT * FROM your_table) and see if the problem persists; a sketch using the JDBC source’s query option follows this list.
    • Gradually add complexity to your query to identify the root cause.
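
As a sketch of points 2 and 5, you could pass an explicit statement through the JDBC source’s query option instead of dbtable, so your backtick-quoted column names reach the warehouse exactly as written (a minimal, untested sketch; the table name is taken from your post):

def read_with_query(spark, jdbc_url):
    # "query" and "dbtable" are mutually exclusive options of the JDBC source;
    # with "query", Spark wraps this statement as an inner subquery unchanged.
    query = ("SELECT `video_name`, `video_transcript` "
             "FROM researchers.dev.test_transcripts LIMIT 5")
    df = (spark.read
          .format("jdbc")
          .option("url", jdbc_url)
          .option("query", query)
          .load())
    df.show(truncate=False)
    return df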
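On point 4, if the driver is available from a Maven repository, Spark can resolve and download it for you instead of relying on hard-coded jar paths. The coordinates below are an assumption; verify them against Maven Central before use:

from pyspark import SparkConf
from pyspark.sql import SparkSession

conf = SparkConf()
# Assumed coordinates; check the actual group:artifact:version on Maven Central.
conf.set("spark.jars.packages", "com.databricks:databricks-jdbc:2.6.38")

spark = SparkSession.builder \
    .appName("connecting to databricks") \
    .config(conf=conf) \
    .getOrCreate()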

Hopefully, this will help you resolve the issue and retrieve the data correctly. If you encounter any further issues or need additional assistance, feel free to ask! 😊

 