<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: ODBC driver installation - help needed in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/odbc-driver-installation-help-needed/m-p/150119#M53251</link>
    <description>&lt;P&gt;Hi &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/101716"&gt;@DylanStout&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;This is worth walking through carefully, and it sounds like you have already done solid research on the constraints. Below is the most likely reason your init script is hanging, followed by a complete working approach for offline ODBC driver installation.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;WHY YOUR INIT SCRIPT IS HANGING&lt;/P&gt;
&lt;P&gt;The most common reason an init script gets stuck on "Running Init Scripts" when installing the Microsoft ODBC driver .deb package is the EULA (End User License Agreement) prompt. When you run dpkg -i on the msodbcsql17 package, it triggers a debconf prompt asking you to accept the Microsoft EULA. Since init scripts run non-interactively, that prompt blocks forever waiting for input, which is exactly the "stuck" behavior you are seeing.&lt;/P&gt;
&lt;P&gt;The "|| true" at the end of your dpkg command prevents it from returning a non-zero exit code (which would fail the cluster), but it does not prevent the interactive prompt from hanging the process.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;THE FIX: ACCEPT THE EULA NON-INTERACTIVELY&lt;/P&gt;
&lt;P&gt;You need to pre-accept the EULA before running dpkg. There are two ways to do this:&lt;/P&gt;
&lt;P&gt;Option A: Use the ACCEPT_EULA environment variable&lt;/P&gt;
&lt;P&gt;export ACCEPT_EULA=Y&lt;BR /&gt;dpkg -i "${PKG_DIR}/msodbcsql17_17.10.6.1-1_amd64.deb"&lt;/P&gt;
&lt;P&gt;Option B: Use debconf-set-selections to pre-seed the answer&lt;/P&gt;
&lt;P&gt;echo "msodbcsql17 msodbcsql/ACCEPT_EULA boolean true" | debconf-set-selections&lt;BR /&gt;dpkg -i "${PKG_DIR}/msodbcsql17_17.10.6.1-1_amd64.deb"&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;COMPLETE WORKING INIT SCRIPT FOR OFFLINE INSTALLATION&lt;/P&gt;
&lt;P&gt;Here is a complete init script that handles the EULA, dependencies, and error cases. Upload all required .deb files to a Unity Catalog Volume (recommended) or Workspace Files location, then reference them in the script.&lt;/P&gt;
&lt;P&gt;Step 1: Gather the required .deb packages&lt;/P&gt;
&lt;P&gt;The msodbcsql17 package depends on unixodbc. On Ubuntu 22.04-based Databricks Runtimes, unixodbc in turn pulls in libodbc2, libodbcinst2, and odbcinst; on older Ubuntu 20.04-based runtimes the equivalent packages are named libodbc1 and odbcinst1debian2, so match the .deb names to your runtime's Ubuntu release. The unixodbc stack and its sub-dependencies may already be present on the image, but in a fully offline environment you should have them available just in case. The packages you need are:&lt;/P&gt;
&lt;P&gt;- msodbcsql17_17.10.6.1-1_amd64.deb (the driver itself)&lt;BR /&gt;- unixodbc_*.deb (if not already installed)&lt;BR /&gt;- libodbc2_*.deb (dependency of unixodbc, if needed)&lt;BR /&gt;- libodbcinst2_*.deb (dependency of unixodbc, if needed)&lt;/P&gt;
&lt;P&gt;You can check which are already present by running "dpkg -l | grep unixodbc" in a notebook on a running cluster to see what is pre-installed.&lt;/P&gt;
&lt;P&gt;Step 2: Upload the packages&lt;/P&gt;
&lt;P&gt;Upload them to a Unity Catalog Volume, for example:&lt;/P&gt;
&lt;P&gt;/Volumes/my_catalog/my_schema/my_volume/odbc/&lt;/P&gt;
&lt;P&gt;Or to Workspace Files:&lt;/P&gt;
&lt;P&gt;/Workspace/Users/&amp;lt;your-user&amp;gt;/odbc/&lt;/P&gt;
&lt;P&gt;Step 3: Create the init script&lt;/P&gt;
&lt;P&gt;#!/bin/bash&lt;BR /&gt;set -e&lt;/P&gt;
&lt;P&gt;# Path where you uploaded the .deb packages&lt;BR /&gt;PKG_DIR="/Volumes/my_catalog/my_schema/my_volume/odbc"&lt;BR /&gt;# If using Workspace Files instead, use:&lt;BR /&gt;# PKG_DIR="/Workspace/Users/&amp;lt;your-user&amp;gt;/odbc"&lt;/P&gt;
&lt;P&gt;# Pre-accept the Microsoft EULA (this prevents the interactive hang)&lt;BR /&gt;export ACCEPT_EULA=Y&lt;/P&gt;
&lt;P&gt;# Install dependencies first if they are not already present&lt;BR /&gt;if ! dpkg -s unixodbc &amp;gt; /dev/null 2&amp;gt;&amp;amp;1; then&lt;BR /&gt;echo "Installing unixODBC dependencies..."&lt;BR /&gt;dpkg -i "${PKG_DIR}"/libodbc2_*.deb || true&lt;BR /&gt;dpkg -i "${PKG_DIR}"/libodbcinst2_*.deb || true&lt;BR /&gt;dpkg -i "${PKG_DIR}"/unixodbc_*.deb || true&lt;BR /&gt;fi&lt;/P&gt;
&lt;P&gt;# Install the Microsoft ODBC Driver 17&lt;BR /&gt;echo "Installing msodbcsql17..."&lt;BR /&gt;dpkg -i "${PKG_DIR}/msodbcsql17_17.10.6.1-1_amd64.deb"&lt;/P&gt;
&lt;P&gt;# Verify the installation&lt;BR /&gt;odbcinst -q -d -n "ODBC Driver 17 for SQL Server"&lt;BR /&gt;echo "ODBC Driver 17 installed successfully."&lt;/P&gt;
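&lt;P&gt;One optional hardening step for the script above (a sketch, reusing the same example volume path): confirm the package directory is actually visible on the node before calling dpkg, so an unmounted or misspelled path produces a readable line in the init script log instead of a cryptic dpkg error:&lt;/P&gt;

```shell
# Minimal guard sketch: fail early with a clear log line if the
# package directory is not visible on this node.
check_pkg_dir() {
  if [ ! -d "$1" ]; then
    echo "ERROR: package directory $1 not found on this node"
    return 1
  fi
  echo "found package directory: $1"
}

# Demo call with a directory that exists on any Linux node; in the
# init script you would run: check_pkg_dir "$PKG_DIR" || exit 1
check_pkg_dir /tmp
```

&lt;P&gt;Placed right after PKG_DIR is set, this turns the most common failure mode (the volume path not being readable from the node) into a one-line diagnosis.&lt;/P&gt;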
&lt;P&gt;Step 4: Configure the init script on your cluster&lt;/P&gt;
&lt;P&gt;1. Go to your cluster configuration&lt;BR /&gt;2. Click Advanced Options&lt;BR /&gt;3. Go to the Init Scripts tab&lt;BR /&gt;4. Select your source (Volumes or Workspace) and enter the path to the script&lt;BR /&gt;5. Click Add, then Confirm and Restart&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;IMPORTANT NOTES&lt;/P&gt;
&lt;P&gt;1. Init scripts run as root, so you do not need sudo. The "requires superuser privilege" error you saw was from trying to run dpkg inside a Python notebook cell, which runs as a non-root user. Init scripts do not have this problem.&lt;/P&gt;
&lt;P&gt;2. Init scripts run on ALL nodes (both driver and workers), so pyodbc will be available everywhere.&lt;/P&gt;
&lt;P&gt;3. The installation does not persist across cluster restarts, which is expected. The init script runs every time the cluster starts, so this is handled automatically.&lt;/P&gt;
&lt;P&gt;4. Make sure the path to your packages is accessible from all nodes. Unity Catalog Volumes (recommended for Databricks Runtime 13.3 LTS and above) and Workspace Files are both accessible from all nodes during init script execution.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;ALTERNATIVE: CONSIDER USING JDBC INSTEAD&lt;/P&gt;
&lt;P&gt;If your goal is simply to read data from SQL Server into Databricks DataFrames, you may not need ODBC at all. Databricks has built-in JDBC support that does not require any additional driver installation. The Microsoft SQL Server JDBC driver is included in the Databricks Runtime by default. Here is an example:&lt;/P&gt;
&lt;P&gt;jdbc_url = "jdbc:sqlserver://&amp;lt;server&amp;gt;:&amp;lt;port&amp;gt;;databaseName=&amp;lt;database&amp;gt;"&lt;/P&gt;
&lt;P&gt;df = (spark.read&lt;BR /&gt;.format("jdbc")&lt;BR /&gt;.option("url", jdbc_url)&lt;BR /&gt;.option("dbtable", "&amp;lt;schema.table&amp;gt;")&lt;BR /&gt;.option("user", dbutils.secrets.get(scope="my_scope", key="sql_user"))&lt;BR /&gt;.option("password", dbutils.secrets.get(scope="my_scope", key="sql_password"))&lt;BR /&gt;.load()&lt;BR /&gt;)&lt;/P&gt;
&lt;P&gt;df.display()&lt;/P&gt;
&lt;P&gt;This approach works out of the box with no init scripts, no driver installation, and no internet access required. It also uses Databricks Secrets for secure credential management.&lt;/P&gt;
&lt;P&gt;If you specifically need pyodbc (for example, for executing stored procedures or DDL commands that are not supported via Spark JDBC), then the init script approach above is the way to go.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;DEBUGGING TIPS&lt;/P&gt;
&lt;P&gt;If you still have issues after updating the init script:&lt;/P&gt;
&lt;P&gt;1. Enable cluster log delivery in your cluster configuration so init script logs are persisted.&lt;/P&gt;
&lt;P&gt;2. Check the logs at: &amp;lt;cluster-log-path&amp;gt;/&amp;lt;cluster-id&amp;gt;/init_scripts/&lt;/P&gt;
&lt;P&gt;3. You can also check logs on a running cluster at: /databricks/init_scripts/&lt;/P&gt;
&lt;P&gt;4. To verify the driver is installed correctly from a notebook cell, run:&lt;/P&gt;
&lt;P&gt;import subprocess&lt;BR /&gt;result = subprocess.run(["odbcinst", "-q", "-d"], capture_output=True, text=True)&lt;BR /&gt;print(result.stdout)&lt;/P&gt;
&lt;P&gt;5. Test pyodbc connectivity:&lt;/P&gt;
&lt;P&gt;import pyodbc&lt;BR /&gt;conn = pyodbc.connect(&lt;BR /&gt;"DRIVER={ODBC Driver 17 for SQL Server};"&lt;BR /&gt;"SERVER=&amp;lt;your-server&amp;gt;;"&lt;BR /&gt;"DATABASE=&amp;lt;your-database&amp;gt;;"&lt;BR /&gt;"UID=&amp;lt;username&amp;gt;;"&lt;BR /&gt;"PWD=&amp;lt;password&amp;gt;"&lt;BR /&gt;)&lt;BR /&gt;cursor = conn.cursor()&lt;BR /&gt;cursor.execute("SELECT 1")&lt;BR /&gt;print(cursor.fetchone())&lt;BR /&gt;conn.close()&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;REFERENCES&lt;/P&gt;
&lt;P&gt;- Databricks init scripts overview: &lt;A href="https://docs.databricks.com/en/init-scripts/index.html" target="_blank"&gt;https://docs.databricks.com/en/init-scripts/index.html&lt;/A&gt;&lt;BR /&gt;- Cluster-scoped init scripts: &lt;A href="https://docs.databricks.com/en/init-scripts/cluster-scoped.html" target="_blank"&gt;https://docs.databricks.com/en/init-scripts/cluster-scoped.html&lt;/A&gt;&lt;BR /&gt;- Init script logging: &lt;A href="https://docs.databricks.com/en/init-scripts/logs.html" target="_blank"&gt;https://docs.databricks.com/en/init-scripts/logs.html&lt;/A&gt;&lt;BR /&gt;- JDBC connectivity for external databases: &lt;A href="https://docs.databricks.com/en/connect/external-systems/jdbc.html" target="_blank"&gt;https://docs.databricks.com/en/connect/external-systems/jdbc.html&lt;/A&gt;&lt;BR /&gt;- Microsoft ODBC Driver for SQL Server on Linux (includes offline install guidance): &lt;A href="https://learn.microsoft.com/en-us/sql/connect/odbc/linux-mac/installing-the-microsoft-odbc-driver-for-sql-server" target="_blank"&gt;https://learn.microsoft.com/en-us/sql/connect/odbc/linux-mac/installing-the-microsoft-odbc-driver-for-sql-server&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Hope this gets you unblocked. The EULA acceptance is almost certainly the fix for the hang you are seeing.&lt;/P&gt;
&lt;P&gt;* This reply used an agent system I built to research and draft this response based on the wide set of documentation I have available and previous memory. I personally review the draft for any obvious issues and for monitoring system reliability and update it when I detect any drift, but there is still a small chance that something is inaccurate, especially if you are experimenting with brand new features.&lt;/P&gt;</description>
    <pubDate>Sun, 08 Mar 2026 04:12:11 GMT</pubDate>
    <dc:creator>SteveOstrowski</dc:creator>
    <dc:date>2026-03-08T04:12:11Z</dc:date>
    <item>
      <title>ODBC driver installation - help needed</title>
      <link>https://community.databricks.com/t5/data-engineering/odbc-driver-installation-help-needed/m-p/147788#M52776</link>
      <description>&lt;P&gt;Hello,&amp;nbsp;&lt;/P&gt;&lt;DIV&gt;&lt;P&gt;I’m trying to use &lt;STRONG&gt;pyodbc&lt;/STRONG&gt; inside &lt;STRONG&gt;Databricks&lt;/STRONG&gt; to connect to a SQL Server database, but I’m working in a &lt;STRONG&gt;restricted, offline Databricks workspace&lt;/STRONG&gt; (no outbound internet).&lt;/P&gt;&lt;P&gt;What I’ve learned so far:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;P&gt;Databricks clusters &lt;STRONG&gt;do not include&lt;/STRONG&gt; Microsoft’s ODBC Driver 17 or 18 by default.&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;I downloaded the .deb package manually:&lt;/P&gt;&lt;PRE&gt;msodbcsql17_17.10.6.1-1_amd64.deb&lt;/PRE&gt;&lt;P&gt;and uploaded it to:&lt;/P&gt;&lt;PRE&gt;/Workspace/Users/&amp;lt;user&amp;gt;/odbc/&lt;/PRE&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;When I try to install it from a .py script using dpkg -i, it fails because:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Python jobs &lt;STRONG&gt;run as non-root&lt;/STRONG&gt;, so dpkg → “requires superuser privilege”&lt;/LI&gt;&lt;LI&gt;Python jobs run on the &lt;STRONG&gt;driver only&lt;/STRONG&gt;, not on executors&lt;/LI&gt;&lt;LI&gt;Installation would not persist across cluster restarts anyway&lt;/LI&gt;&lt;/UL&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&lt;STRONG&gt;So my real goal is:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Install ODBC Driver 17 on all cluster nodes, offline, with no internet, and enable pyodbc to connect to SQL Server from Databricks.&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;From what I understand, the correct approach is:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Use an &lt;STRONG&gt;init script&lt;/STRONG&gt; that installs the .deb file at cluster startup (runs as root),&lt;/LI&gt;&lt;LI&gt;Possibly install additional dependency .deb packages (libodbc1, unixodbc, odbcinst1debian2, etc.),&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;I’m looking for guidance from anyone who has successfully done an &lt;STRONG&gt;offline ODBC driver installation&lt;/STRONG&gt; in Databricks.&lt;/P&gt;&lt;P&gt;Currently I am 
running this shell script as an init script:&lt;/P&gt;&lt;P&gt;#!/bin/bash&lt;BR /&gt;# install_msodbcsql17_offline.sh&lt;/P&gt;&lt;P&gt;# Where you uploaded the packages&lt;BR /&gt;PKG_DIR="/Workspace/Users/&amp;lt;me&amp;gt;/odbc"&lt;/P&gt;&lt;P&gt;# Install msodbcsql17 from local .deb&lt;BR /&gt;dpkg -i "${PKG_DIR}/msodbcsql17_17.10.6.1-1_amd64.deb" || true&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;However, this script never completes when the cluster is starting (it gets stuck on "Running Init Scripts").&lt;/P&gt;&lt;/DIV&gt;</description>
      <pubDate>Mon, 09 Feb 2026 21:20:17 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/odbc-driver-installation-help-needed/m-p/147788#M52776</guid>
      <dc:creator>DylanStout</dc:creator>
      <dc:date>2026-02-09T21:20:17Z</dc:date>
    </item>
    <item>
      <title>Re: ODBC driver installation - help needed</title>
      <link>https://community.databricks.com/t5/data-engineering/odbc-driver-installation-help-needed/m-p/150119#M53251</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/101716"&gt;@DylanStout&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;This is worth walking through carefully, and it sounds like you have already done solid research on the constraints. Below is the most likely reason your init script is hanging, followed by a complete working approach for offline ODBC driver installation.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;WHY YOUR INIT SCRIPT IS HANGING&lt;/P&gt;
&lt;P&gt;The most common reason an init script gets stuck on "Running Init Scripts" when installing the Microsoft ODBC driver .deb package is the EULA (End User License Agreement) prompt. When you run dpkg -i on the msodbcsql17 package, it triggers a debconf prompt asking you to accept the Microsoft EULA. Since init scripts run non-interactively, that prompt blocks forever waiting for input, which is exactly the "stuck" behavior you are seeing.&lt;/P&gt;
&lt;P&gt;The "|| true" at the end of your dpkg command prevents it from returning a non-zero exit code (which would fail the cluster), but it does not prevent the interactive prompt from hanging the process.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;THE FIX: ACCEPT THE EULA NON-INTERACTIVELY&lt;/P&gt;
&lt;P&gt;You need to pre-accept the EULA before running dpkg. There are two ways to do this:&lt;/P&gt;
&lt;P&gt;Option A: Use the ACCEPT_EULA environment variable&lt;/P&gt;
&lt;P&gt;export ACCEPT_EULA=Y&lt;BR /&gt;dpkg -i "${PKG_DIR}/msodbcsql17_17.10.6.1-1_amd64.deb"&lt;/P&gt;
&lt;P&gt;Option B: Use debconf-set-selections to pre-seed the answer&lt;/P&gt;
&lt;P&gt;echo "msodbcsql17 msodbcsql/ACCEPT_EULA boolean true" | debconf-set-selections&lt;BR /&gt;dpkg -i "${PKG_DIR}/msodbcsql17_17.10.6.1-1_amd64.deb"&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;COMPLETE WORKING INIT SCRIPT FOR OFFLINE INSTALLATION&lt;/P&gt;
&lt;P&gt;Here is a complete init script that handles the EULA, dependencies, and error cases. Upload all required .deb files to a Unity Catalog Volume (recommended) or Workspace Files location, then reference them in the script.&lt;/P&gt;
&lt;P&gt;Step 1: Gather the required .deb packages&lt;/P&gt;
&lt;P&gt;The msodbcsql17 package depends on unixodbc. On Ubuntu 22.04-based Databricks Runtimes, unixodbc in turn pulls in libodbc2, libodbcinst2, and odbcinst; on older Ubuntu 20.04-based runtimes the equivalent packages are named libodbc1 and odbcinst1debian2, so match the .deb names to your runtime's Ubuntu release. The unixodbc stack and its sub-dependencies may already be present on the image, but in a fully offline environment you should have them available just in case. The packages you need are:&lt;/P&gt;
&lt;P&gt;- msodbcsql17_17.10.6.1-1_amd64.deb (the driver itself)&lt;BR /&gt;- unixodbc_*.deb (if not already installed)&lt;BR /&gt;- libodbc2_*.deb (dependency of unixodbc, if needed)&lt;BR /&gt;- libodbcinst2_*.deb (dependency of unixodbc, if needed)&lt;/P&gt;
&lt;P&gt;You can check which are already present by running "dpkg -l | grep unixodbc" in a notebook on a running cluster to see what is pre-installed.&lt;/P&gt;
&lt;P&gt;Step 2: Upload the packages&lt;/P&gt;
&lt;P&gt;Upload them to a Unity Catalog Volume, for example:&lt;/P&gt;
&lt;P&gt;/Volumes/my_catalog/my_schema/my_volume/odbc/&lt;/P&gt;
&lt;P&gt;Or to Workspace Files:&lt;/P&gt;
&lt;P&gt;/Workspace/Users/&amp;lt;your-user&amp;gt;/odbc/&lt;/P&gt;
&lt;P&gt;Step 3: Create the init script&lt;/P&gt;
&lt;P&gt;#!/bin/bash&lt;BR /&gt;set -e&lt;/P&gt;
&lt;P&gt;# Path where you uploaded the .deb packages&lt;BR /&gt;PKG_DIR="/Volumes/my_catalog/my_schema/my_volume/odbc"&lt;BR /&gt;# If using Workspace Files instead, use:&lt;BR /&gt;# PKG_DIR="/Workspace/Users/&amp;lt;your-user&amp;gt;/odbc"&lt;/P&gt;
&lt;P&gt;# Pre-accept the Microsoft EULA (this prevents the interactive hang)&lt;BR /&gt;export ACCEPT_EULA=Y&lt;/P&gt;
&lt;P&gt;# Install dependencies first if they are not already present&lt;BR /&gt;if ! dpkg -s unixodbc &amp;gt; /dev/null 2&amp;gt;&amp;amp;1; then&lt;BR /&gt;echo "Installing unixODBC dependencies..."&lt;BR /&gt;dpkg -i "${PKG_DIR}"/libodbc2_*.deb || true&lt;BR /&gt;dpkg -i "${PKG_DIR}"/libodbcinst2_*.deb || true&lt;BR /&gt;dpkg -i "${PKG_DIR}"/unixodbc_*.deb || true&lt;BR /&gt;fi&lt;/P&gt;
&lt;P&gt;# Install the Microsoft ODBC Driver 17&lt;BR /&gt;echo "Installing msodbcsql17..."&lt;BR /&gt;dpkg -i "${PKG_DIR}/msodbcsql17_17.10.6.1-1_amd64.deb"&lt;/P&gt;
&lt;P&gt;# Verify the installation&lt;BR /&gt;odbcinst -q -d -n "ODBC Driver 17 for SQL Server"&lt;BR /&gt;echo "ODBC Driver 17 installed successfully."&lt;/P&gt;
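&lt;P&gt;One optional hardening step for the script above (a sketch, reusing the same example volume path): confirm the package directory is actually visible on the node before calling dpkg, so an unmounted or misspelled path produces a readable line in the init script log instead of a cryptic dpkg error:&lt;/P&gt;

```shell
# Minimal guard sketch: fail early with a clear log line if the
# package directory is not visible on this node.
check_pkg_dir() {
  if [ ! -d "$1" ]; then
    echo "ERROR: package directory $1 not found on this node"
    return 1
  fi
  echo "found package directory: $1"
}

# Demo call with a directory that exists on any Linux node; in the
# init script you would run: check_pkg_dir "$PKG_DIR" || exit 1
check_pkg_dir /tmp
```

&lt;P&gt;Placed right after PKG_DIR is set, this turns the most common failure mode (the volume path not being readable from the node) into a one-line diagnosis.&lt;/P&gt;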
&lt;P&gt;Step 4: Configure the init script on your cluster&lt;/P&gt;
&lt;P&gt;1. Go to your cluster configuration&lt;BR /&gt;2. Click Advanced Options&lt;BR /&gt;3. Go to the Init Scripts tab&lt;BR /&gt;4. Select your source (Volumes or Workspace) and enter the path to the script&lt;BR /&gt;5. Click Add, then Confirm and Restart&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;IMPORTANT NOTES&lt;/P&gt;
&lt;P&gt;1. Init scripts run as root, so you do not need sudo. The "requires superuser privilege" error you saw was from trying to run dpkg inside a Python notebook cell, which runs as a non-root user. Init scripts do not have this problem.&lt;/P&gt;
&lt;P&gt;2. Init scripts run on ALL nodes (both driver and workers), so pyodbc will be available everywhere.&lt;/P&gt;
&lt;P&gt;3. The installation does not persist across cluster restarts, which is expected. The init script runs every time the cluster starts, so this is handled automatically.&lt;/P&gt;
&lt;P&gt;4. Make sure the path to your packages is accessible from all nodes. Unity Catalog Volumes (recommended for Databricks Runtime 13.3 LTS and above) and Workspace Files are both accessible from all nodes during init script execution.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;ALTERNATIVE: CONSIDER USING JDBC INSTEAD&lt;/P&gt;
&lt;P&gt;If your goal is simply to read data from SQL Server into Databricks DataFrames, you may not need ODBC at all. Databricks has built-in JDBC support that does not require any additional driver installation. The Microsoft SQL Server JDBC driver is included in the Databricks Runtime by default. Here is an example:&lt;/P&gt;
&lt;P&gt;jdbc_url = "jdbc:sqlserver://&amp;lt;server&amp;gt;:&amp;lt;port&amp;gt;;databaseName=&amp;lt;database&amp;gt;"&lt;/P&gt;
&lt;P&gt;df = (spark.read&lt;BR /&gt;.format("jdbc")&lt;BR /&gt;.option("url", jdbc_url)&lt;BR /&gt;.option("dbtable", "&amp;lt;schema.table&amp;gt;")&lt;BR /&gt;.option("user", dbutils.secrets.get(scope="my_scope", key="sql_user"))&lt;BR /&gt;.option("password", dbutils.secrets.get(scope="my_scope", key="sql_password"))&lt;BR /&gt;.load()&lt;BR /&gt;)&lt;/P&gt;
&lt;P&gt;df.display()&lt;/P&gt;
&lt;P&gt;This approach works out of the box with no init scripts, no driver installation, and no internet access required. It also uses Databricks Secrets for secure credential management.&lt;/P&gt;
&lt;P&gt;If you specifically need pyodbc (for example, for executing stored procedures or DDL commands that are not supported via Spark JDBC), then the init script approach above is the way to go.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;DEBUGGING TIPS&lt;/P&gt;
&lt;P&gt;If you still have issues after updating the init script:&lt;/P&gt;
&lt;P&gt;1. Enable cluster log delivery in your cluster configuration so init script logs are persisted.&lt;/P&gt;
&lt;P&gt;2. Check the logs at: &amp;lt;cluster-log-path&amp;gt;/&amp;lt;cluster-id&amp;gt;/init_scripts/&lt;/P&gt;
&lt;P&gt;3. You can also check logs on a running cluster at: /databricks/init_scripts/&lt;/P&gt;
&lt;P&gt;4. To verify the driver is installed correctly from a notebook cell, run:&lt;/P&gt;
&lt;P&gt;import subprocess&lt;BR /&gt;result = subprocess.run(["odbcinst", "-q", "-d"], capture_output=True, text=True)&lt;BR /&gt;print(result.stdout)&lt;/P&gt;
&lt;P&gt;5. Test pyodbc connectivity:&lt;/P&gt;
&lt;P&gt;import pyodbc&lt;BR /&gt;conn = pyodbc.connect(&lt;BR /&gt;"DRIVER={ODBC Driver 17 for SQL Server};"&lt;BR /&gt;"SERVER=&amp;lt;your-server&amp;gt;;"&lt;BR /&gt;"DATABASE=&amp;lt;your-database&amp;gt;;"&lt;BR /&gt;"UID=&amp;lt;username&amp;gt;;"&lt;BR /&gt;"PWD=&amp;lt;password&amp;gt;"&lt;BR /&gt;)&lt;BR /&gt;cursor = conn.cursor()&lt;BR /&gt;cursor.execute("SELECT 1")&lt;BR /&gt;print(cursor.fetchone())&lt;BR /&gt;conn.close()&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;REFERENCES&lt;/P&gt;
&lt;P&gt;- Databricks init scripts overview: &lt;A href="https://docs.databricks.com/en/init-scripts/index.html" target="_blank"&gt;https://docs.databricks.com/en/init-scripts/index.html&lt;/A&gt;&lt;BR /&gt;- Cluster-scoped init scripts: &lt;A href="https://docs.databricks.com/en/init-scripts/cluster-scoped.html" target="_blank"&gt;https://docs.databricks.com/en/init-scripts/cluster-scoped.html&lt;/A&gt;&lt;BR /&gt;- Init script logging: &lt;A href="https://docs.databricks.com/en/init-scripts/logs.html" target="_blank"&gt;https://docs.databricks.com/en/init-scripts/logs.html&lt;/A&gt;&lt;BR /&gt;- JDBC connectivity for external databases: &lt;A href="https://docs.databricks.com/en/connect/external-systems/jdbc.html" target="_blank"&gt;https://docs.databricks.com/en/connect/external-systems/jdbc.html&lt;/A&gt;&lt;BR /&gt;- Microsoft ODBC Driver for SQL Server on Linux (includes offline install guidance): &lt;A href="https://learn.microsoft.com/en-us/sql/connect/odbc/linux-mac/installing-the-microsoft-odbc-driver-for-sql-server" target="_blank"&gt;https://learn.microsoft.com/en-us/sql/connect/odbc/linux-mac/installing-the-microsoft-odbc-driver-for-sql-server&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Hope this gets you unblocked. The EULA acceptance is almost certainly the fix for the hang you are seeing.&lt;/P&gt;
&lt;P&gt;* This reply used an agent system I built to research and draft this response based on the wide set of documentation I have available and previous memory. I personally review the draft for any obvious issues and for monitoring system reliability and update it when I detect any drift, but there is still a small chance that something is inaccurate, especially if you are experimenting with brand new features.&lt;/P&gt;</description>
      <pubDate>Sun, 08 Mar 2026 04:12:11 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/odbc-driver-installation-help-needed/m-p/150119#M53251</guid>
      <dc:creator>SteveOstrowski</dc:creator>
      <dc:date>2026-03-08T04:12:11Z</dc:date>
    </item>
  </channel>
</rss>

