<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Writing PySpark DataFrame onto AWS Glue throwing error in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/writing-pyspark-dataframe-onto-aws-glue-throwing-error/m-p/21056#M14294</link>
    <description>&lt;P&gt;@Kaniz Fatma&lt;/P&gt;&lt;P&gt;I am also facing the same issue while using the `saveAsTable` function of DataFrameWriter. Here is the code snippet:&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;import org.apache.spark.sql.functions.{col, dayofmonth, month, to_date, year}
import org.apache.spark.sql.types.DataTypes

val df = ??? // some-dataframe-here
val glueTableName = "database-name-here.table-name-here"
val s3Path = "s3a://some/path/here/"
val partitionKeys = Array("some-partition-key-here")

val dataframeWithYearMonthDay = df
  .withColumn("year", year(to_date(col("createdAt"))).cast(DataTypes.FloatType))
  .withColumn("month", month(to_date(col("createdAt"))).cast(DataTypes.FloatType))
  .withColumn("day", dayofmonth(to_date(col("createdAt"))).cast(DataTypes.FloatType))

dataframeWithYearMonthDay.write
  .partitionBy(List("year", "month", "day") ++ partitionKeys: _*)
  .mode("append")
  .format("parquet")
  .option("path", s3Path)
  .saveAsTable(glueTableName)&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;PFA the stack trace. Please note that the given S3 location is completely empty and I am trying to create a new table here.&lt;/P&gt;&lt;P&gt;Also, I am facing this issue with only one table; writes to other tables succeed.&lt;/P&gt;&lt;P&gt;Please let me know if any other information is needed from my end.&lt;/P&gt;</description>
    <pubDate>Sat, 04 Mar 2023 06:20:11 GMT</pubDate>
    <dc:creator>prabhatika</dc:creator>
    <dc:date>2023-03-04T06:20:11Z</dc:date>
    <item>
      <title>Writing PySpark DataFrame onto AWS Glue throwing error</title>
      <link>https://community.databricks.com/t5/data-engineering/writing-pyspark-dataframe-onto-aws-glue-throwing-error/m-p/21051#M14289</link>
      <description>&lt;P&gt;I have followed the steps mentioned in this blog: &lt;A href="https://www.linkedin.com/pulse/aws-glue-data-catalog-metastore-databricks-deepak-rajak/" target="_blank"&gt;https://www.linkedin.com/pulse/aws-glue-data-catalog-metastore-databricks-deepak-rajak/&lt;/A&gt;&lt;/P&gt;&lt;P&gt;but when trying to &lt;B&gt;saveAsTable(table_name)&lt;/B&gt;, it gives an error: &lt;/P&gt;&lt;P&gt;&lt;B&gt;IllegalArgumentException: Path must be absolute: &amp;lt;table_name&amp;gt;-__PLACEHOLDER__&lt;/B&gt;&lt;/P&gt;&lt;P&gt;&lt;B&gt;Can somebody help me with this, please?&lt;/B&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 10 May 2022 08:12:02 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/writing-pyspark-dataframe-onto-aws-glue-throwing-error/m-p/21051#M14289</guid>
      <dc:creator>raghub1</dc:creator>
      <dc:date>2022-05-10T08:12:02Z</dc:date>
    </item>
    <item>
      <title>Re: Writing PySpark DataFrame onto AWS Glue throwing error</title>
      <link>https://community.databricks.com/t5/data-engineering/writing-pyspark-dataframe-onto-aws-glue-throwing-error/m-p/21052#M14290</link>
      <description>&lt;P&gt;Looking at the error message, I believe the problem is that the Glue database has no location set, which DBR/Delta needs. You can run&amp;nbsp;ALTER DATABASE datalake-processed SET LOCATION '...'&amp;nbsp;or set the location directly in the Glue console on AWS.&lt;/P&gt;&lt;P&gt;&lt;A href="https://docs.databricks.com/data/metastores/aws-glue-metastore.html#creating-a-table-in-a-database-with-empty-location" target="_blank"&gt;https://docs.databricks.com/data/metastores/aws-glue-metastore.html#creating-a-table-in-a-database-with-empty-location&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 12 May 2022 12:55:48 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/writing-pyspark-dataframe-onto-aws-glue-throwing-error/m-p/21052#M14290</guid>
      <dc:creator>Prabakar</dc:creator>
      <dc:date>2022-05-12T12:55:48Z</dc:date>
    </item>
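    <!-- A minimal sketch of the fix described in the reply above, assuming a
         Databricks cluster already configured to use the AWS Glue Data Catalog
         as its metastore; the database name and S3 URI below are hypothetical
         placeholders, not values from this thread:

    ```scala
    // Give the Glue database an explicit location so DBR/Delta can resolve
    // table paths under it (database name and bucket are placeholders).
    spark.sql("ALTER DATABASE `datalake-processed` SET LOCATION 's3a://my-bucket/datalake-processed/'")

    // Or, when creating the database in the first place, set a location up front:
    spark.sql("CREATE DATABASE IF NOT EXISTS `datalake-processed` LOCATION 's3a://my-bucket/datalake-processed/'")
    ```

    Once the database has a location, saveAsTable with an explicit "path"
    option should no longer fail with the __PLACEHOLDER__ path error.
    -->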
    <item>
      <title>Re: Writing PySpark DataFrame onto AWS Glue throwing error</title>
      <link>https://community.databricks.com/t5/data-engineering/writing-pyspark-dataframe-onto-aws-glue-throwing-error/m-p/21053#M14291</link>
      <description>&lt;P&gt;Thanks Prabakar, I used the option of specifying the path as S3, but it didn't work. This is the code I used:&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;df_final.write.partitionBy("partition_cols").mode("append").option("path", "s3:// location").saveAsTable("table_name")&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;Can you help me out, please?&lt;/P&gt;</description>
      <pubDate>Thu, 12 May 2022 13:32:14 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/writing-pyspark-dataframe-onto-aws-glue-throwing-error/m-p/21053#M14291</guid>
      <dc:creator>raghub1</dc:creator>
      <dc:date>2022-05-12T13:32:14Z</dc:date>
    </item>
    <item>
      <title>Re: Writing PySpark DataFrame onto AWS Glue throwing error</title>
      <link>https://community.databricks.com/t5/data-engineering/writing-pyspark-dataframe-onto-aws-glue-throwing-error/m-p/21055#M14293</link>
      <description>&lt;P&gt;Hey @Raghu Bharadwaj Tallapragada&lt;/P&gt;&lt;P&gt;Just wanted to check in: were you able to resolve your issue, or do you need more help? We'd love to hear from you.&lt;/P&gt;&lt;P&gt;Thanks!&lt;/P&gt;</description>
      <pubDate>Tue, 21 Jun 2022 16:10:24 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/writing-pyspark-dataframe-onto-aws-glue-throwing-error/m-p/21055#M14293</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2022-06-21T16:10:24Z</dc:date>
    </item>
    <item>
      <title>Re: Writing PySpark DataFrame onto AWS Glue throwing error</title>
      <link>https://community.databricks.com/t5/data-engineering/writing-pyspark-dataframe-onto-aws-glue-throwing-error/m-p/21056#M14294</link>
      <description>&lt;P&gt;@Kaniz Fatma&lt;/P&gt;&lt;P&gt;I am also facing the same issue while using the `saveAsTable` function of DataFrameWriter. Here is the code snippet:&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;import org.apache.spark.sql.functions.{col, dayofmonth, month, to_date, year}
import org.apache.spark.sql.types.DataTypes

val df = ??? // some-dataframe-here
val glueTableName = "database-name-here.table-name-here"
val s3Path = "s3a://some/path/here/"
val partitionKeys = Array("some-partition-key-here")

val dataframeWithYearMonthDay = df
  .withColumn("year", year(to_date(col("createdAt"))).cast(DataTypes.FloatType))
  .withColumn("month", month(to_date(col("createdAt"))).cast(DataTypes.FloatType))
  .withColumn("day", dayofmonth(to_date(col("createdAt"))).cast(DataTypes.FloatType))

dataframeWithYearMonthDay.write
  .partitionBy(List("year", "month", "day") ++ partitionKeys: _*)
  .mode("append")
  .format("parquet")
  .option("path", s3Path)
  .saveAsTable(glueTableName)&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;PFA the stack trace. Please note that the given S3 location is completely empty and I am trying to create a new table here.&lt;/P&gt;&lt;P&gt;Also, I am facing this issue with only one table; writes to other tables succeed.&lt;/P&gt;&lt;P&gt;Please let me know if any other information is needed from my end.&lt;/P&gt;</description>
      <pubDate>Sat, 04 Mar 2023 06:20:11 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/writing-pyspark-dataframe-onto-aws-glue-throwing-error/m-p/21056#M14294</guid>
      <dc:creator>prabhatika</dc:creator>
      <dc:date>2023-03-04T06:20:11Z</dc:date>
    </item>
  </channel>
</rss>

