<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Create csv and upload on azure in Get Started Discussions</title>
    <link>https://community.databricks.com/t5/get-started-discussions/create-csv-and-upload-on-azure/m-p/106861#M9601</link>
    <description>&lt;P class=""&gt;&lt;SPAN class=""&gt;Hey!&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;Well, not many details have been provided, but in general, this approach could work for you. Hope it helps!&lt;BR /&gt;&lt;SPAN class=""&gt;df = spark.sql(&lt;/SPAN&gt;"SELECT * FROM stages.benefit"&lt;SPAN class=""&gt;)&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;df.write.mode(&lt;SPAN class=""&gt;"overwrite"&lt;/SPAN&gt;).option(&lt;SPAN class=""&gt;"header"&lt;/SPAN&gt;, &lt;SPAN class=""&gt;"true"&lt;/SPAN&gt;).csv(output_path)&lt;BR /&gt;&lt;BR /&gt;&lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;</description>
    <pubDate>Thu, 23 Jan 2025 23:23:15 GMT</pubDate>
    <dc:creator>Isi</dc:creator>
    <dc:date>2025-01-23T23:23:15Z</dc:date>
    <item>
      <title>Create csv and upload on azure</title>
      <link>https://community.databricks.com/t5/get-started-discussions/create-csv-and-upload-on-azure/m-p/106583#M9600</link>
      <description>&lt;P&gt;Can someone write a SQL query which queries a table like select * from stages.benefit, creates a CSV, and uploads it to Azure?&lt;/P&gt;</description>
      <pubDate>Wed, 22 Jan 2025 06:50:38 GMT</pubDate>
      <guid>https://community.databricks.com/t5/get-started-discussions/create-csv-and-upload-on-azure/m-p/106583#M9600</guid>
      <dc:creator>subhadeep</dc:creator>
      <dc:date>2025-01-22T06:50:38Z</dc:date>
    </item>
    <item>
      <title>Re: Create csv and upload on azure</title>
      <link>https://community.databricks.com/t5/get-started-discussions/create-csv-and-upload-on-azure/m-p/106861#M9601</link>
      <description>&lt;P class=""&gt;&lt;SPAN class=""&gt;Hey!&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;Well, not many details have been provided, but in general, this approach could work for you. Hope it helps!&lt;BR /&gt;&lt;SPAN class=""&gt;df = spark.sql(&lt;/SPAN&gt;"SELECT * FROM stages.benefit"&lt;SPAN class=""&gt;)&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;df.write.mode(&lt;SPAN class=""&gt;"overwrite"&lt;/SPAN&gt;).option(&lt;SPAN class=""&gt;"header"&lt;/SPAN&gt;, &lt;SPAN class=""&gt;"true"&lt;/SPAN&gt;).csv(output_path)&lt;BR /&gt;&lt;BR /&gt;&lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 23 Jan 2025 23:23:15 GMT</pubDate>
      <guid>https://community.databricks.com/t5/get-started-discussions/create-csv-and-upload-on-azure/m-p/106861#M9601</guid>
      <dc:creator>Isi</dc:creator>
      <dc:date>2025-01-23T23:23:15Z</dc:date>
    </item>
    <item>
      <title>Re: Create csv and upload on azure</title>
      <link>https://community.databricks.com/t5/get-started-discussions/create-csv-and-upload-on-azure/m-p/106868#M9602</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/145027"&gt;@subhadeep&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;You can achieve this in SQL similarly to how you write a DataFrame to a table or blob path. We create an external table pointing to the blob path (or a mounted blob path). Note that this table does not support ACID transactions and versioning the way a Delta table does; however, you can define partitions to manage the data. Instead of a single CSV file, you will see multiple part files written as CSVs as you insert data.&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;-- Create a CSV-format external table pointing to the blob location
CREATE OR REPLACE TABLE new_external_table_in_blob (
  id INT,
  ...
)
USING CSV
LOCATION '.../path/blob';

-- Insert data into the table
INSERT INTO new_external_table_in_blob
VALUES (1, ...);
&lt;/LI-CODE&gt;&lt;P&gt;To obtain a single file, use a PySpark or Scala DataFrame write with coalesce(1) so the data is written as a single partition to the blob path. If your data volume is small, you can instead convert the Spark DataFrame to a pandas DataFrame and write it to the blob path as a single CSV file.&lt;/P&gt;&lt;P&gt;Regards,&lt;BR /&gt;Hari Prasad&lt;/P&gt;</description>
      <pubDate>Fri, 24 Jan 2025 03:13:49 GMT</pubDate>
      <guid>https://community.databricks.com/t5/get-started-discussions/create-csv-and-upload-on-azure/m-p/106868#M9602</guid>
      <dc:creator>hari-prasad</dc:creator>
      <dc:date>2025-01-24T03:13:49Z</dc:date>
    </item>
  </channel>
</rss>

