I'm trying to connect to NetSuite2.com using PySpark from a Databricks notebook via a JDBC driver.

I successfully set up a DBVisualizer connection by installing the JDBC driver (JAR) and generating the password with the one-time hashed NONCE, as required.

However, when I ran the same URL, JAR, and password (generated with a fresh NONCE) in PySpark within Databricks, I got the following error:
```
SparkException: Job aborted due to stage failure: Task 0 in stage 36.0 failed 4 times, most recent failure: Lost task 0.3 in stage 36.0 (TID ***) (*** executor 1): java.sql.SQLException: [NetSuite][SuiteAnalytics Connect JDBC Driver][OpenAccess SDK SQL Engine]Failed to login using TBA. Error ticket# m3z*********cp2f[232]
```
I followed the same password-regeneration rules I used for DBVisualizer, but it still did not work. I have tried multiple combinations in CustomProperties and keep getting this error.

Please look at the following code and help me identify the error and how to correct it:
```python
pwd = generate_tba_password(account_id, consumer_key, token_id,
                            consumer_secret, token_secret)

# Construct the JDBC URL
jdbc_url = (
    f"jdbc:ns://{host}:{port};ServerDataSource=NetSuite2.com;"
    f"Encrypted=1;NegotiateSSLClose=false;"
    f"CustomProperties=(AccountID={account_id};RoleID={role_id};AuthType=TOKEN;"
    f"TokenID={token_id};TokenSecret={token_secret};"
    f"ConsumerKey={consumer_key};ConsumerSecret={consumer_secret})"
)

# Read data from NetSuite
df = (
    spark.read.format("jdbc")
    .option("url", jdbc_url)
    .option("User", "TBA")
    .option("Password", pwd)
    .option("driver", "com.netsuite.jdbc.openaccess.OpenAccessDriver")
    .option("dbtable", "customer")
    .load()
)
df.show()
```
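For reference, here is a sketch of what my `generate_tba_password` helper does. It follows my understanding of NetSuite's documented TBA password scheme for SuiteAnalytics Connect (ampersand-joined base string, HMAC-SHA256 signature keyed on `consumer_secret&token_secret`); the nonce length of 20 is my own choice, not something the docs mandate:

```python
import base64
import hashlib
import hmac
import secrets
import string
import time

def generate_tba_password(account_id, consumer_key, token_id,
                          consumer_secret, token_secret):
    # One-time NONCE (random alphanumeric) and current epoch timestamp.
    nonce = "".join(secrets.choice(string.ascii_letters + string.digits)
                    for _ in range(20))
    timestamp = str(int(time.time()))
    # Base string and signing key are ampersand-joined, per the TBA scheme.
    base_string = "&".join([account_id, consumer_key, token_id, nonce, timestamp])
    key = f"{consumer_secret}&{token_secret}"
    # Signature is the base64-encoded HMAC-SHA256 of the base string.
    signature = base64.b64encode(
        hmac.new(key.encode(), base_string.encode(), hashlib.sha256).digest()
    ).decode()
    # Final password: base string, signature, and algorithm name.
    return f"{base_string}&{signature}&HMAC-SHA256"
```

Since the NONCE and timestamp are baked into the password, each generated password is single-use, which is why I regenerate it before every connection attempt.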