Coming from an MS SQL background, I'm trying to write a Spark SQL query that simply updates a column value in table A (the source table) by INNER JOINing a new table B, with a filter.
The MS SQL query looks like this:
UPDATE T
SET T.OfferAmount = OSE.EndpointEventAmountValue
FROM #TempOffer T
INNER JOIN OfferSeq OSE
ON OSE.OfferId = T.OfferID
AND OSE.OfferEventSequenceId = 1
Since Spark SQL didn't like the UPDATE command, I've been using MERGE to update records (as @Hubert Dudek suggested in this post), which worked fine for cases without a filter. But when there is a filter like "OSE.OfferEventSequenceId = 1", I'm not sure where it belongs in the Spark SQL query below (I added a WHERE clause here, but I'm not sure that's valid syntax):
%sql
MERGE into TempOffer VIEW
USING OfferSeq OSE ON VIEW.OfferId = OSE.OfferID
WHERE OSE.OfferEventSequenceId = 1
WHEN MATCHED THEN UPDATE set VIEW.OfferAmount = OSE.EndpointEventAmountValue;
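For reference, here is the other variant I was considering: folding the filter into the ON clause, since I believe Delta Lake's MERGE allows ANDed conditions in the merge condition. (I also renamed the alias from VIEW to T, since VIEW is a reserved keyword; otherwise the table and column names are the same as above.) I'm not sure whether this is equivalent to the MS SQL UPDATE:

```sql
MERGE INTO TempOffer T
USING OfferSeq OSE
  ON T.OfferId = OSE.OfferId
  AND OSE.OfferEventSequenceId = 1   -- filter as part of the merge condition
WHEN MATCHED THEN
  UPDATE SET T.OfferAmount = OSE.EndpointEventAmountValue;
```

Is this the right place for the filter, or does it behave differently from a WHERE clause here?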