<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Ingest data from REST endpoint into Databricks in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/ingest-data-from-rest-endpoint-into-databricks/m-p/155574#M54277</link>
    <description>&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/216690"&gt;@Ashwin_DSA&lt;/a&gt;&amp;nbsp;&amp;nbsp;I agree with your thought. I’ve been using a similar Python-based solution in Databricks to download a few GBs of data, and it has worked reliably so far.&lt;/P&gt;</description>
    <pubDate>Mon, 27 Apr 2026 14:27:25 GMT</pubDate>
    <dc:creator>rohan22sri</dc:creator>
    <dc:date>2026-04-27T14:27:25Z</dc:date>
    <item>
      <title>Ingest data from REST endpoint into Databricks</title>
      <link>https://community.databricks.com/t5/data-engineering/ingest-data-from-rest-endpoint-into-databricks/m-p/155367#M54234</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;I'm looking for the best option to retrieve between 1 and 1.5 TB of data per day from a REST API into Databricks.&lt;/P&gt;&lt;P&gt;Thank you,&lt;/P&gt;&lt;P&gt;Rodrigo Escamilla&lt;/P&gt;</description>
      <pubDate>Thu, 23 Apr 2026 20:10:44 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/ingest-data-from-rest-endpoint-into-databricks/m-p/155367#M54234</guid>
      <dc:creator>RodrigoE</dc:creator>
      <dc:date>2026-04-23T20:10:44Z</dc:date>
    </item>
    <item>
      <title>Re: Ingest data from REST endpoint into Databricks</title>
      <link>https://community.databricks.com/t5/data-engineering/ingest-data-from-rest-endpoint-into-databricks/m-p/155373#M54236</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/201196"&gt;@RodrigoE&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;It would be helpful to have additional information to recommend the best options for your scenario.&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Who owns the REST API?&lt;/LI&gt;
&lt;LI&gt;Is that in your control?&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Can the source push data to Databricks, or should you pull on a schedule?&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;If the source can push the data, consider &lt;A href="https://www.databricks.com/blog/announcing-general-availability-zerobus-ingest-part-lakeflow-connect" target="_blank"&gt;Zerobus&lt;/A&gt;. This is the cleanest, most scalable Databricks-native pattern if the producer is under your control.&lt;/P&gt;
&lt;P&gt;If you have no control over the source, you can build a custom Python data source wrapping their REST API and run it as a Databricks job/stream. While the pattern will work for your volumes,&amp;nbsp;the bottleneck is usually the API’s own throughput/limits, not Databricks.&lt;/P&gt;
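&lt;P&gt;As a hedged sketch of that pull pattern (not a drop-in implementation), a custom Python data source on recent runtimes could look like the following; the endpoint, schema, and option names are illustrative assumptions:&lt;/P&gt;
&lt;PRE&gt;# Hypothetical sketch of a PySpark Python data source wrapping a REST API.
# The base URL, schema, and paging scheme are assumptions for illustration.
from pyspark.sql.datasource import DataSource, DataSourceReader, InputPartition

class RestApiDataSource(DataSource):
    @classmethod
    def name(cls):
        return "rest_api"

    def schema(self):
        return "id string, payload string"

    def reader(self, schema):
        return RestApiReader(self.options)

class RestApiReader(DataSourceReader):
    def __init__(self, options):
        self.base_url = options.get("url", "https://api.example.com/v3/data")
        self.num_pages = int(options.get("pages", "8"))

    def partitions(self):
        # One partition per page, so Spark parallelises the HTTP calls.
        return [InputPartition(i) for i in range(self.num_pages)]

    def read(self, partition):
        import requests  # imported on the executor
        resp = requests.get(f"{self.base_url}?page={partition.value}", timeout=30)
        resp.raise_for_status()
        for rec in resp.json():
            yield (rec.get("id"), str(rec))

spark.dataSource.register(RestApiDataSource)
df = spark.read.format("rest_api").option("pages", "64").load()&lt;/PRE&gt;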
&lt;P&gt;&lt;FONT size="2" color="#FF6600"&gt;&lt;STRONG&gt;&lt;I&gt;If this answer resolves your question, could you mark it as “Accept as Solution”? That helps other users quickly find the correct fix.&lt;/I&gt;&lt;/STRONG&gt;&lt;/FONT&gt;&lt;I&gt;&lt;/I&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 23 Apr 2026 21:10:08 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/ingest-data-from-rest-endpoint-into-databricks/m-p/155373#M54236</guid>
      <dc:creator>Ashwin_DSA</dc:creator>
      <dc:date>2026-04-23T21:10:08Z</dc:date>
    </item>
    <item>
      <title>Re: Ingest data from REST endpoint into Databricks</title>
      <link>https://community.databricks.com/t5/data-engineering/ingest-data-from-rest-endpoint-into-databricks/m-p/155547#M54273</link>
      <description>&lt;P&gt;Hi Rodrigo,&lt;/P&gt;
&lt;P&gt;One simple approach I’ve used is calling the REST API directly from a Databricks notebook using standard Python libraries—no extra setup or tools required.&lt;/P&gt;
&lt;P&gt;The idea is to keep it minimal: generate the API signature, call the endpoint, and load the response. Here’s a very simplified example:&lt;/P&gt;
&lt;PRE&gt;import time
import hashlib
import requests

# Generate API signature
def generate_signature(api_key, secret):
    raw = api_key + secret + str(int(time.time()))
    return hashlib.md5(raw.encode()).hexdigest()

# Call API
def fetch_data():
    api_key = "&amp;lt;YOUR_API_KEY&amp;gt;"
    secret = "&amp;lt;YOUR_SECRET&amp;gt;"
    endpoint = "your-endpoint"

    sig = generate_signature(api_key, secret)
    url = f"https://api.example.com/v3/{endpoint}?apiKey={api_key}&amp;amp;sig={sig}"

    response = requests.get(url)
    return response.json()

# Run
data = fetch_data()&lt;/PRE&gt;
&lt;P&gt;That’s really all you need to get started. From there, you can store the data in DBFS or a table.&lt;/P&gt;
&lt;P&gt;If you need more throughput, you can later add parallel calls or pagination—but for smaller payloads, this works well and is very easy to maintain.&lt;/P&gt;
&lt;P&gt;Best regards,&lt;BR /&gt;Rohan&lt;/P&gt;</description>
      <pubDate>Mon, 27 Apr 2026 08:18:32 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/ingest-data-from-rest-endpoint-into-databricks/m-p/155547#M54273</guid>
      <dc:creator>rohan22sri</dc:creator>
      <dc:date>2026-04-27T08:18:32Z</dc:date>
    </item>
    <item>
      <title>Re: Ingest data from REST endpoint into Databricks</title>
      <link>https://community.databricks.com/t5/data-engineering/ingest-data-from-rest-endpoint-into-databricks/m-p/155568#M54275</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/116713"&gt;@rohan22sri&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;This pattern is great for initial testing or low-volume pulls, but it won’t scale to the 1-1.5 TB/day &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/201196"&gt;@RodrigoE&lt;/a&gt;&amp;nbsp;is targeting. A few reasons:&lt;/P&gt;
&lt;P&gt;A single requests.get loop from one notebook driver will hit API and cluster limits long before you reach TB/day. You need partitioned/paginated reads and fan-out across workers (e.g., via mapInPandas, foreachBatch, or a Python Data Source), not a single-threaded client. At this volume, you must handle rate limits, exponential backoff, and idempotent retries systematically, baking that into a reusable ingestion component rather than inline notebook code.&lt;/P&gt;
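&lt;P&gt;For illustration, here is a minimal sketch of that fan-out pattern using mapInPandas with exponential backoff; the page URLs and response shape are assumptions, not the actual API:&lt;/P&gt;
&lt;PRE&gt;# Hypothetical sketch: distribute paginated REST pulls across executors
# with mapInPandas, backing off exponentially on 429/5xx responses.
import time
import requests
import pandas as pd

def fetch_page(url, retries=5):
    # GET with exponential backoff on rate limits and transient errors.
    for attempt in range(retries):
        resp = requests.get(url, timeout=30)
        if resp.status_code in (429, 500, 502, 503):
            time.sleep(2 ** attempt)  # 1s, 2s, 4s, ...
            continue
        resp.raise_for_status()
        return resp.json()
    raise RuntimeError(f"gave up after {retries} attempts: {url}")

def fetch_partition(batches):
    # Runs on executors; each input row carries one page URL.
    for pdf in batches:
        for url in pdf["page_url"]:
            yield pd.DataFrame({"page_url": [url], "body": [str(fetch_page(url))]})

# Driver side: one row per page; Spark distributes the HTTP calls.
pages = spark.createDataFrame(
    [(f"https://api.example.com/v3/data?page={i}",) for i in range(1000)],
    "page_url string",
)
raw = pages.repartition(64).mapInPandas(fetch_partition, "page_url string, body string")&lt;/PRE&gt;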
&lt;P&gt;Also, at daily TB scale you can’t keep re-pulling everything. You need a robust cursor strategy (timestamps/IDs), checkpointing, and the ability to replay/backfill safely.&lt;/P&gt;
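&lt;P&gt;A minimal version of that cursor strategy, assuming a small Delta bookkeeping table (all names below are placeholders):&lt;/P&gt;
&lt;PRE&gt;# Hypothetical high-water-mark cursor persisted in a Delta table, so each
# run pulls only new records and a backfill is just a cursor reset.
CURSOR_TABLE = "main.ingest.rest_api_cursor"  # assumed Unity Catalog table

def read_cursor():
    row = spark.sql(f"SELECT max(high_water_mark) AS hwm FROM {CURSOR_TABLE}").first()
    return row.hwm or "1970-01-01T00:00:00Z"

def write_cursor(hwm):
    spark.sql(f"INSERT INTO {CURSOR_TABLE} VALUES ('{hwm}', current_timestamp())")

since = read_cursor()
# ... pull only records with updated_at later than `since`, land them, then
# advance the cursor to the max updated_at actually ingested:
# write_cursor(new_hwm)&lt;/PRE&gt;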
&lt;P&gt;Lastly, you’ll need scheduled workflows, monitoring (lag, error rate, API quota usage), and alerting. A one-off notebook with requests is hard to industrialise and support.&lt;/P&gt;
&lt;P&gt;That's why, I would recommend that data be written directly into Databricks via Zerobus Ingest, which is designed for high-throughput, push-based ingestion into Delta tables... especially if it is a pull.&amp;nbsp;For a pull model, build a custom Python data source for this REST API and run it as a Databricks job / structured stream, so Spark handles parallelism and retries.&amp;nbsp;We can still use your minimal requests example as a starting point to validate auth and payload shape...but should treat it as a spike, not the production architecture.&lt;/P&gt;
&lt;P&gt;Another thing that I wanted to call out is the use of DBFS. For production ingestion at this scale we wouldn’t land the data in DBFS. DBFS is really a legacy workspace file system and best for scratch / notebooks, not for 1-1.5 TB/day of source data. For long-term pipelines you should consider landing into Unity Catalog volumes and Delta tables, so you get proper governance (row/column ACLs), lineage, discovery, and all the newer features (Lakeflow, Zerobus, Auto Loader, etc.) that don’t integrate with DBFS.&lt;/P&gt;
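&lt;P&gt;For instance, landing raw responses in a Unity Catalog volume and loading them into a Delta table with Auto Loader; the catalog, schema, and volume names below are placeholders:&lt;/P&gt;
&lt;PRE&gt;# Hypothetical landing path in a Unity Catalog volume, picked up by
# Auto Loader and written to a governed Delta table.
landing = "/Volumes/main/ingest/rest_landing"

(spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", f"{landing}/_schema")
    .load(f"{landing}/raw")
    .writeStream
    .option("checkpointLocation", f"{landing}/_checkpoint")
    .trigger(availableNow=True)
    .toTable("main.ingest.rest_events"))&lt;/PRE&gt;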
&lt;P class="p1"&gt;&lt;FONT size="2" color="#FF6600"&gt;&lt;STRONG&gt;&lt;I&gt;If this answer resolves your question, could you mark it as “Accept as Solution”? That helps other users quickly find the correct fix.&lt;/I&gt;&lt;/STRONG&gt;&lt;/FONT&gt;&lt;I&gt;&lt;/I&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 27 Apr 2026 12:56:11 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/ingest-data-from-rest-endpoint-into-databricks/m-p/155568#M54275</guid>
      <dc:creator>Ashwin_DSA</dc:creator>
      <dc:date>2026-04-27T12:56:11Z</dc:date>
    </item>
    <item>
      <title>Re: Ingest data from REST endpoint into Databricks</title>
      <link>https://community.databricks.com/t5/data-engineering/ingest-data-from-rest-endpoint-into-databricks/m-p/155574#M54277</link>
      <description>&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/216690"&gt;@Ashwin_DSA&lt;/a&gt;&amp;nbsp;&amp;nbsp;I agree with your thought. I’ve been using a similar Python-based solution in Databricks to download a few GBs of data, and it has worked reliably so far.&lt;/P&gt;</description>
      <pubDate>Mon, 27 Apr 2026 14:27:25 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/ingest-data-from-rest-endpoint-into-databricks/m-p/155574#M54277</guid>
      <dc:creator>rohan22sri</dc:creator>
      <dc:date>2026-04-27T14:27:25Z</dc:date>
    </item>
  </channel>
</rss>

