Hi @emanueol,
The error you are seeing ("Connection type JDBC requires environment settings") is because you selected "JDBC" as the connection type, which is a generic connector intended for databases that do not have a dedicated connection type in Databricks. JDBC connections require you to configure an environment with the appropriate JDBC driver JAR, which is why Databricks is asking for environment settings.
The good news is that Snowflake has its own dedicated, first-class connection type in Databricks, so you do not need a generic JDBC connection at all. Here is how to set it up properly.
OPTION 1: USE THE NATIVE SNOWFLAKE CONNECTION TYPE (RECOMMENDED)
Instead of selecting "JDBC" as the connection type, select "Snowflake" directly. Databricks has built-in support for Snowflake through Lakehouse Federation, which handles all the JDBC plumbing for you behind the scenes.
Through the Catalog Explorer UI:
1. Go to Catalog, click the (+) button, then select "Add a connection"
2. For Connection type, choose "Snowflake" (not JDBC)
3. Fill in the connection details:
- Host: your-account.snowflakecomputing.com
- Port: 443
- Authentication: Username/Password or OAuth
4. Click "Create connection"
Or through SQL:
CREATE CONNECTION snowflake_conn
TYPE SNOWFLAKE
OPTIONS (
  host 'your-account.snowflakecomputing.com',
  port '443',
  user secret('your-scope', 'sf-user'),
  password secret('your-scope', 'sf-password'),
  sfWarehouse 'COMPUTE_WH'
);
Then create a foreign catalog to browse and query your Snowflake tables:
CREATE FOREIGN CATALOG snowflake_catalog
USING CONNECTION snowflake_conn
OPTIONS (
  database 'your_snowflake_db'
);
After that, you can query Snowflake tables directly:
SELECT * FROM snowflake_catalog.schema_name.table_name;
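If you end up scripting these queries from Python rather than a SQL cell, a tiny helper that backtick-quotes each part of the three-level name avoids surprises with mixed-case or reserved-word Snowflake identifiers. This is just a sketch; the catalog, schema, and table names are placeholders:

```python
def fq_name(catalog: str, schema: str, table: str) -> str:
    """Return a fully qualified, backtick-quoted three-level table name."""
    return ".".join(f"`{part}`" for part in (catalog, schema, table))

# In a Databricks notebook (where `spark` is predefined):
# spark.sql(f"SELECT * FROM {fq_name('snowflake_catalog', 'schema_name', 'table_name')} LIMIT 10")
```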
Docs: https://docs.databricks.com/en/query-federation/snowflake.html
OPTION 2: USE THE SNOWFLAKE CONNECTOR IN NOTEBOOKS
If you want to read Snowflake data into a DataFrame for transformations in a notebook, you can use the built-in Snowflake Spark connector. No connection object is needed for this approach:
snowflake_df = (spark.read
    .format("snowflake")
    .option("host", "your-account.snowflakecomputing.com")
    .option("port", "443")
    .option("user", "your_username")
    .option("password", "your_password")  # prefer dbutils.secrets.get over hardcoding
    .option("sfWarehouse", "COMPUTE_WH")
    .option("database", "your_database")
    .option("schema", "your_schema")
    .option("dbtable", "your_table")
    .load()
)
snowflake_df.show()
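One caution about the snippet above: avoid hardcoding credentials in notebooks. A small helper that assembles the option dict, fed from a secret scope via dbutils.secrets.get, keeps passwords out of your code. This is a sketch, not the only way to do it; the scope and key names are placeholders:

```python
def snowflake_options(host, user, password, database, schema,
                      warehouse="COMPUTE_WH", port="443"):
    """Assemble the option dict for spark.read.format('snowflake')."""
    return {
        "host": host,
        "port": port,
        "user": user,
        "password": password,
        "sfWarehouse": warehouse,
        "database": database,
        "schema": schema,
    }

# In a notebook (where `spark` and `dbutils` are predefined):
# opts = snowflake_options(
#     host="your-account.snowflakecomputing.com",
#     user=dbutils.secrets.get("your-scope", "sf-user"),
#     password=dbutils.secrets.get("your-scope", "sf-password"),
#     database="your_database",
#     schema="your_schema",
# )
# df = (spark.read.format("snowflake")
#       .options(**opts)
#       .option("dbtable", "your_table")
#       .load())
```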
The Snowflake connector is pre-installed on Databricks Runtime 11.3 LTS and above, so no additional driver setup is needed.
Docs: https://docs.databricks.com/en/connect/external-systems/snowflake.html
IMPORTANT NOTE ABOUT THE FREE TIER
Since you mentioned you are on the free Databricks tier, be aware of a few things:
- The free edition provides serverless compute and a Unity Catalog-enabled workspace, which is great.
- Lakehouse Federation (Option 1) requires a Pro or Serverless SQL warehouse, or a cluster running Databricks Runtime 13.3 LTS or above with Standard access mode.
- If you encounter permission errors when creating the connection, check that your account has the CREATE CONNECTION privilege on the metastore. On the free tier you should be the metastore admin by default.
- Make sure your Snowflake account is network-accessible from Databricks. The free tier has some outbound network restrictions, so if Snowflake connectivity is blocked, that could be a separate issue to troubleshoot.
Free edition limitations reference: https://docs.databricks.com/en/getting-started/free-edition-limitations.html
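If you suspect the outbound network restriction is the problem, a quick TCP probe from a notebook cell tells you whether the Snowflake endpoint is reachable at all before you spend time on credentials. A minimal sketch; replace the placeholder host with your actual account hostname:

```python
import socket

def check_reachable(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# In a notebook:
# check_reachable("your-account.snowflakecomputing.com")
```

If this returns False for your Snowflake host, the problem is network access rather than the connection configuration itself.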
SUMMARY
The key fix is to change your connection type from "JDBC" to "Snowflake" in the connection setup wizard. The native Snowflake connection type handles everything for you without needing custom environment configurations or driver JARs.
Hope this helps get you connected. Let me know if you run into any other issues during setup.
* This reply was drafted by an agent system I built, which researches responses using the documentation I have available and previous memory. I personally review each draft for obvious issues and to monitor system reliability, and I correct it when I detect drift, but there is still a small chance something is inaccurate, especially if you are experimenting with brand-new features.