- 4805 Views
- 4 replies
- 5 kudos
We use SQL Server to store data. I would like to connect to SQL Server to pull, manipulate, and sometimes push data back. I've seen some examples online of connecting, but I cannot successfully re-create them.
Latest Reply
Junee
New Contributor III
You can use the jTDS library from Maven; add it to your cluster. Once installed, you can write the code below to connect to your database. In Scala:
import java.util.Properties
val driverClass = "net.sourceforge.jtds.jdbc.Driver"
val serve...
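For reference, a minimal end-to-end sketch of that approach, assuming the jTDS jar is already installed on the cluster; the host, database, table, and credentials below are placeholders, not values from the reply:
import java.util.Properties
val driverClass = "net.sourceforge.jtds.jdbc.Driver"
// Placeholder connection details - replace with your own server, database, and credentials
val jdbcUrl = "jdbc:jtds:sqlserver://my-sql-host:1433/MyDatabase"
val connectionProperties = new Properties()
connectionProperties.put("user", "my_user")
connectionProperties.put("password", "my_password")
connectionProperties.put("driver", driverClass)
// Read a SQL Server table into a DataFrame over JDBC
val df = spark.read.jdbc(jdbcUrl, "dbo.MyTable", connectionProperties)
display(df)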
3 More Replies
- 15299 Views
- 12 replies
- 10 kudos
Is it possible to connect to an on-premises SQL Server (not Azure) from Databricks? I tried to ping my VirtualBox VM (with Windows Server 2022) from within Databricks and the request timed out.
%sh
ping 122.138.0.14
This is what my connection might look l...
Latest Reply
I tried to connect to a localhost SQL Server through Databricks Community Edition, but it failed. I created an IP rule on port 1433 allowing inbound connections from all public networks, but it still didn't connect. I tried locally using Python and it work...
11 More Replies
by
Hardy
• New Contributor III
- 7738 Views
- 5 replies
- 6 kudos
I am trying to connect to SQL Server through JDBC from a Databricks notebook. (Below is my notebook command.)
val df = spark.read.jdbc(jdbcUrl, "[MyTableName]", connectionProperties)
println(df.schema)
When I execute this command with DBR 10.4 LTS, it works fin...
Latest Reply
Try adding the following parameters to your SQL connection string. It fixed my problem on 13.X and 12.X:
;trustServerCertificate=true;hostNameInCertificate=*.database.windows.net;
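For illustration, a sketch of a connection string with those parameters appended (server, database, and credentials are placeholders, not the original poster's values):
// Hypothetical Azure SQL server, database, and credentials - substitute your own
val jdbcUrl = "jdbc:sqlserver://myserver.database.windows.net:1433;" +
  "databaseName=MyDatabase;" +
  "trustServerCertificate=true;" +
  "hostNameInCertificate=*.database.windows.net;"
val connectionProperties = new java.util.Properties()
connectionProperties.put("user", "my_user")
connectionProperties.put("password", "my_password")
connectionProperties.put("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
val df = spark.read.jdbc(jdbcUrl, "[MyTableName]", connectionProperties)
println(df.schema)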
4 More Replies
by
Kazer
• New Contributor III
- 7482 Views
- 2 replies
- 1 kudos
Hi. I am trying to read from our Microsoft SQL Server in Azure Databricks via spark.read.jdbc() as described here: Query databases using JDBC - Azure Databricks | Microsoft Learn. The SQL Server is on an Azure VM in a virtual network peered with th...
Latest Reply
Hi @Kazer, even if I use a new table name, I get the same error. Do you have any suggestions? Thanks,
1 More Reply
- 23312 Views
- 5 replies
- 5 kudos
I'm trying to find the best strategy for handling big data sets. In this case I have a data set of 450 million records. I'm pulling the data from SQL Server very quickly, but when I try to push the data to the Delta table OR an Azure container the...
Latest Reply
I think you should consult experts in big data for advice on this issue.
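For what it's worth, a common pattern for a table of this size is a partitioned JDBC read followed by a plain Delta write, so no single task has to hold all 450 million rows. A minimal sketch with placeholder names (server, table, id column, output path), not the original poster's code:
// Placeholder connection details
val jdbcUrl = "jdbc:sqlserver://my-sql-host:1433;databaseName=MyDatabase"
val connectionProperties = new java.util.Properties()
connectionProperties.put("user", "my_user")
connectionProperties.put("password", "my_password")

// Read the 450M-row table in parallel by splitting on a numeric key column
val df = spark.read.jdbc(
  jdbcUrl,
  "dbo.BigTable",       // placeholder source table
  "id",                 // assumed numeric, roughly evenly distributed column
  1L,                   // lowerBound
  450000000L,           // upperBound
  64,                   // numPartitions
  connectionProperties)

// Write straight to Delta; each partition is written independently
df.write
  .format("delta")
  .mode("overwrite")
  .save("/mnt/datalake/big_table_delta") // placeholder output path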
4 More Replies
- 2448 Views
- 2 replies
- 1 kudos
Hi all, I hope you're well. I need your recommendations and a solution for my problem. I am using a Databricks DS12_v2 instance, which has 28 GB RAM and 4 cores. I am ingesting 7.2 million rows into a SQL Server table and it is taking 57 min - 1 hou...
Latest Reply
You can try using BULK INSERT: https://learn.microsoft.com/en-us/sql/t-sql/statements/bulk-insert-transact-sql?view=sql-server-ver16. Using Data Factory instead of Databricks for the copy can also be helpful.
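As a Databricks-side alternative to BULK INSERT, tuning the standard Spark JDBC writer can also help. A minimal sketch, assuming df is the DataFrame holding the 7.2 million rows and the connection details are placeholders:
// Placeholder connection details
val jdbcUrl = "jdbc:sqlserver://my-sql-host:1433;databaseName=MyDatabase"

// Repartition so several tasks insert into SQL Server in parallel,
// and raise the JDBC batch size from its default of 1000 rows
df.repartition(8)
  .write
  .format("jdbc")
  .option("url", jdbcUrl)
  .option("dbtable", "dbo.TargetTable") // placeholder target table
  .option("user", "my_user")
  .option("password", "my_password")
  .option("batchsize", "10000")
  .mode("append")
  .save()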
1 More Reply
- 9838 Views
- 3 replies
- 3 kudos
How do I connect to an on-premises SQL Server using Windows authentication from a Databricks notebook?
Latest Reply
You need network connectivity set up from the Databricks VNet to the on-prem SQL Server. Then you can connect from the Databricks notebook over JDBC using a Windows-authenticated username/password - https://docs.microsoft.com/en-us/azure/databricks/data/data-sourc...
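As an illustration only (the reply above does not show code), one common approach is to pass the Windows domain via the jTDS driver's domain property; everything below (host, domain, table, credentials) is a placeholder, and the jTDS jar must be installed on the cluster:
import java.util.Properties
// Hypothetical on-prem host and domain - requires network connectivity from the Databricks VNet
val jdbcUrl = "jdbc:jtds:sqlserver://onprem-sql-host:1433/MyDatabase;domain=MYDOMAIN"
val connectionProperties = new Properties()
connectionProperties.put("user", "windows_user")        // Windows account name, without the domain prefix
connectionProperties.put("password", "windows_password")
connectionProperties.put("driver", "net.sourceforge.jtds.jdbc.Driver")
val df = spark.read.jdbc(jdbcUrl, "dbo.MyTable", connectionProperties)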
2 More Replies
- 2321 Views
- 3 replies
- 2 kudos
If I create a table using the code below:
CREATE TABLE IF NOT EXISTS jdbcTable
USING org.apache.spark.sql.jdbc
OPTIONS (
  url "sql_server_url",
  dbtable "sqlserverTable",
  user "username",
  password "password"
)
will jdbcTable always be automatically sync...
Latest Reply
Hi @andrew li, there is a feature introduced in DBR 11 where you can directly ingest data into a table from a selected list of sources. As you are creating a table, I believe this command will create a managed table by loading the data from the...
2 More Replies
- 1434 Views
- 1 reply
- 0 kudos
jdbcHostname = "478"
jdbcPort = 1433
jdbcDatabase = "Onprem_AzureDB"
jdbcUsername = "upendra"
jdbcPassword = "upendrakumar"
jdbcDriver = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
jdbcUrl = f"jdbc:sqlserver://{jdbcHostname}:{jdbcPort};databaseName={jdbcDatabase};use...
Latest Reply
Hi, could you please verify the network connectivity from Databricks to the SQL Server? Please make sure the SQL port is allowed in your firewall rules or security groups.
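A quick way to check that from a notebook is a plain socket test; a small sketch with a placeholder host and the default SQL Server port 1433:
import java.net.{InetSocketAddress, Socket}
// Placeholder host - replace with your SQL Server's address
val host = "my-sql-host"
val port = 1433
val socket = new Socket()
try {
  // Throws an exception if the port is unreachable within 5 seconds
  socket.connect(new InetSocketAddress(host, port), 5000)
  println(s"Connection to $host:$port succeeded")
} catch {
  case e: Exception => println(s"Connection to $host:$port failed: ${e.getMessage}")
} finally {
  socket.close()
}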
by
Spauk
• New Contributor II
- 19853 Views
- 5 replies
- 7 kudos
We moved to Databricks a few months ago; before that we were on SQL Server. So all our tables and databases follow the "camel case" convention. Apparently, in Databricks the convention is "lowercase with underscores". Where can we find an official doc...
Latest Reply
Hi @Salah KHALFALLAH, looking at the documentation, it appears that Databricks' preferred naming convention is lowercase with underscores, as you mentioned. The reason is most likely that Databricks uses the Hive metastore, which is case-insens...
4 More Replies
by
pavanb
• New Contributor II
- 11119 Views
- 3 replies
- 3 kudos
Hi all, all of a sudden in our Databricks dev environment we are getting memory-related exceptions such as "out of memory", "result too large", etc. Also, the error message is not helping to identify the issue. Can someone please advise on what would be...
Latest Reply
Thanks for the response @Hubert Dudek. If I run the same code in the test environment, it completes successfully, while in dev it gives an out-of-memory issue. Also, the configuration of the test and dev environments is exactly the same.
2 More Replies
- 11308 Views
- 6 replies
- 3 kudos
Hi all, there is a random error when pushing data from Databricks to an Azure SQL Database. Has anyone else had this problem? Any ideas are appreciated. See the stack trace attached. Target: Azure SQL Database, Standard S6: 400 DTUs. Databricks cluster config: "...
Latest Reply
@Pearl Ubaru, TLS 1.1 is already deprecated. Are there any concerns on your side with setting TLS 1.2 in the connection string?
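For illustration, a hedged sketch of forcing TLS 1.2 on the connection via the Microsoft JDBC driver's encrypt and sslProtocol settings (the server name and credentials are placeholders; verify the property names against your driver version):
// Placeholder Azure SQL server and credentials
val jdbcUrl = "jdbc:sqlserver://myserver.database.windows.net:1433;" +
  "databaseName=MyDatabase;" +
  "encrypt=true;" +        // require an encrypted connection
  "sslProtocol=TLSv1.2;"   // pin the TLS version used for the handshake
val props = new java.util.Properties()
props.put("user", "my_user")
props.put("password", "my_password")
val df = spark.read.jdbc(jdbcUrl, "dbo.MyTable", props)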
5 More Replies
by
lizou
• Contributor II
- 3844 Views
- 2 replies
- 2 kudos
How do I find the identity column seed value? A seed value is required when we specifically need to start generating new values from a given number (most likely we need to keep the original key values when data is reloaded from another source, and any new ...
Latest Reply
Found it, thanks! Of course, it would be nice to have a SQL function available to query the value. Example:
"delta.identity.start":984888,"delta.identity.highWaterMark":1004409,"comment":"identity","delta.identity.step":1}
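In the meantime, one way to read those properties from a notebook is SHOW TBLPROPERTIES; a small sketch with a placeholder table name:
import org.apache.spark.sql.functions.col
// Each table property comes back as one (key, value) row
val props = spark.sql("SHOW TBLPROPERTIES my_table")
props.filter(col("key").startsWith("delta.identity")).show(truncate = false)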
1 More Reply
- 12200 Views
- 6 replies
- 7 kudos
I was trying to create a variable and I got the following error.
Command: SET a = 5;
Error: Error running query: Configuration a is not available.
Latest Reply
@Sudeshna Bhakat, what @Joseph Kambourakis described works on clusters but is restricted on Databricks SQL endpoints, i.e., only a limited number of SET commands are allowed. I suggest you explore curly-brace parameters (e.g. {{ my_variable }}) in Databrick...
5 More Replies