I am using the Databricks JDBC driver to access a Delta Lake. The database URL specifies transportMode=http. I have experimented with setting different values of fetchSize on the java.sql.PreparedStatement object and have monitored memory use within m...
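For reference, a minimal sketch of the pattern being described: fetchSize is only a hint for how many rows the driver pulls per round trip, and the Databricks driver may cap or ignore it, so measure memory rather than assume. The clamp bounds below are illustrative, not official driver limits, and countRows expects an already-open Connection (opening one needs a real workspace URL and token).

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class FetchSizeDemo {
    // Keep a requested fetch size within an illustrative sane range.
    static int clampFetchSize(int requested) {
        return Math.max(100, Math.min(requested, 100_000));
    }

    // Stream a result set with an explicit fetch size over an existing connection.
    static long countRows(Connection conn, String sql, int fetchSize) throws SQLException {
        long rows = 0;
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            // Hint: rows per driver round trip; drivers may ignore or cap this.
            ps.setFetchSize(clampFetchSize(fetchSize));
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    rows++;
                }
            }
        }
        return rows;
    }

    public static void main(String[] args) {
        System.out.println(clampFetchSize(10_000)); // prints 10000
    }
}
```

Since fetchSize interacts with the driver's Arrow result buffering, comparing heap use at a few clamped values (for example 1,000 vs 100,000) is usually more informative than trusting the setting alone.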
I think there is a Spark configuration for this, but I can't recall it right now. Please try looking through this doc; maybe you'll find something: https://spark.apache.org/docs/latest/configuration.html
Hello. I am trying to establish a connection between DBeaver and Databricks. I followed the steps in DBeaver integration with Databricks | Databricks on AWS, but I get the following error while testing the connection: Could anyone provide any insight...
I am getting the following error in my Java application: java.sql.SQLException: [Databricks][DatabricksJDBCDriver](500618) Error occured while deserializing arrow data: sun.misc.Unsafe or java.nio.DirectByteBuffer.<init>(long, int) not available. I beli...
Hey, I'm trying to create a DLT pipeline that reads from a JDBC source, and the code I'm using looks something like this in Python:

import dlt

@dlt.table
def table_name():
    driver = 'oracle.jdbc.driver.OracleDriver'
    url = '...'
    query = 'SELECT ......
Hi @Reda Bitar​, great to meet you, and thanks for your question! Let's see if your peers in the community have an answer first; otherwise bricksters will get back to you soon. Thanks!
Hi all, I would like to improve the way I handle JDBC credential information (ID/PW, host, port, etc.). Where do you usually store and use your JDBC credentials? Thanks for your help in advance!
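One common pattern, sketched below under assumptions: credentials are injected via environment variables (the names DBX_JDBC_URL and DBX_TOKEN are made up, not a standard). Inside Databricks notebooks, secret scopes via dbutils.secrets are the usual alternative; the point in both cases is keeping secrets out of source code.

```java
import java.util.Map;
import java.util.Properties;

public class JdbcCredentials {
    // Build connection properties from an environment map; fail fast if the
    // token is missing so a misconfigured deployment is caught at startup.
    static Properties fromEnv(Map<String, String> env) {
        String token = env.get("DBX_TOKEN");
        if (token == null || token.isEmpty()) {
            throw new IllegalStateException("DBX_TOKEN is not set");
        }
        Properties props = new Properties();
        props.setProperty("UID", "token"); // personal-access-token auth
        props.setProperty("PWD", token);
        return props;
    }

    public static void main(String[] args) {
        // Connecting would look like this, given a real URL and token:
        // DriverManager.getConnection(System.getenv("DBX_JDBC_URL"),
        //                             fromEnv(System.getenv()));
        System.out.println("properties built from environment");
    }
}
```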
Hi @Kwangwon Yi​, hope all is well! Just wanted to check in: were you able to resolve your issue, and would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Thanks...
Please, how do I add a notebook to the JDBC URL in order to run queries externally? jdbc:databricks://dbc-a1b2345c-d6e7.cloud.databricks.com:443/default;transportMode=http;ssl=1;httpPath=sql/protocolv1/o/1234567890123456/1234-567890-reef123;AuthMech=3;...
Not sure if it is possible. Alternatively, you could try adding your notebook to a job and then triggering that job via the Jobs API. Please refer to the link below: Jobs API 2.1 | Databricks on AWS
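The suggestion above might look like the following sketch, calling the Jobs API 2.1 run-now endpoint with the JDK's built-in HTTP client. Host, token, and job id are placeholders; real code should also check the response status and handle errors.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RunNotebookJob {
    // Build the run-now request; separated out so it can be inspected.
    static HttpRequest buildRequest(String host, String token, long jobId) {
        String body = "{\"job_id\": " + jobId + "}";
        return HttpRequest.newBuilder()
                .uri(URI.create(host + "/api/2.1/jobs/run-now"))
                .header("Authorization", "Bearer " + token)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }

    // Sending requires a reachable workspace, so main only builds the request.
    public static void main(String[] args) {
        HttpRequest req = buildRequest(
                "https://dbc-example.cloud.databricks.com", "placeholder-token", 123L);
        System.out.println(req.method() + " " + req.uri());
        // HttpResponse<String> resp = HttpClient.newHttpClient()
        //         .send(req, HttpResponse.BodyHandlers.ofString());
    }
}
```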
Access Data in Databricks Using an Application or your Favorite BI Tool. You can leverage Partner Connect for easy, low-configuration connections to some of the most popular BI tools through our optimized connectors. Alternatively, you can follow these...
I am able to connect to the cluster, browse its Hive catalog, and see tables/views and columns/datatypes. Running a simple select statement from a view on a Parquet file produces this error and no other results: "SQL Error [500540] [HY000]: [Databricks][Dat...
Update: I have tried SQL Workbench/J and encountered exactly the same error(s) as with DBeaver. I have also tried JetBrains DataGrip and it worked flawlessly: able to connect, browse the databases, and query tables/views. https://docs.microsoft.com/en...
I'm running a Java application that registers a CSV table with Hive and then checks the number of rows imported. It's done in several steps:

Statement stmt = con.createStatement();
...
stmt.execute("CREATE TABLE ( <definition> )");
...
ResultSet rs...
@Reto Matter​ Are you running a jar job or using dbconnect to run Java code? Please share how you are trying to make the connection, along with the full exception stack trace.
Hi all, I think I might be missing something in regard to No Public IP clusters. I have set this option on a workspace (Azure) and set up the appropriate subnets. To my surprise, when I went to set up a JDBC connection to the cluster, the JDBC connec...
Hey there @Ashley Betts​, hope you are well. Just wanted to see if you were able to find an answer to your question; would you like to mark an answer as best? It would be really helpful for the other members too. Cheers!
Hi all! Before we used Databricks Repos, we used the %run magic to run various utility Python functions from one notebook inside other notebooks, for example for reading from JDBC connections. We now plan to switch to Repos to utilize the fantastic CI/CD pos...
That's... odd. I was sure I had tried that, but now it works somehow. I guess it must be that this time I did it with double quotation marks. Thanks anyway! Works like a charm.
I'm using the Databricks JDBC driver recently made available via Maven: https://mvnrepository.com/artifact/com.databricks/databricks-jdbc/2.6.25. While trying to create a table with `GENERATED` columns I receive the following exception: Caused by: java.s...
I was under the impression that this had been recognised as a bug and was being handled by Databricks. What do I need to do to report the issue officially as a bug?
Hello. I'm fairly new to Databricks and Spark. I have a requirement to connect to Databricks using JDBC, and that works perfectly using the driver I downloaded from the Databricks website ("com.simba.spark.jdbc.Driver"). What I would like to do now is ha...
Hi @Gurps Bassi​, just a friendly follow-up: do you still need help, or did @Hubert Dudek​'s and @Werner Stinckens​'s responses help you find the solution? Please let us know.
I have a Java program like this to test out the Databricks JDBC connection with the Databricks JDBC driver.

Connection connection = null;
try {
    Class.forName(driver);
    connection = DriverManager.getConnection(url...
Hi @Jose Gonzalez​, this similar issue with Snowflake's JDBC driver is a good reference. I was able to get this to work on Java OpenJDK 17 by specifying this JVM option: --add-opens=java.base/java.nio=ALL-UNNAMED. Although I came across another issue with...