I have a SQL query that I am converting to Spark SQL in Azure Databricks, running in a Jupyter notebook. In the SQL query, a column named Type is created on the fly with the value 'Goal' for every row:
SELECT Type='Goal', Value
FROM table
When I use the same syntax in Spark SQL in my Azure Databricks notebook, it gives me an error:
Error in SQL statement: AnalysisException: cannot resolve '`Type`' given input columns:
How can I express the same logic in Azure Databricks Spark SQL?
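For what it's worth, `Type='Goal'` is T-SQL-specific alias syntax; Spark SQL follows the ANSI convention, where the expression comes first and the alias follows `AS`, i.e. `SELECT 'Goal' AS Type, Value FROM table`. Since Spark itself can't run in a short sketch, the snippet below demonstrates the same standard aliasing against an in-memory SQLite table (the table name `t` is just for illustration):

```python
import sqlite3

# Build a tiny table with a Value column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Value INTEGER)")
conn.executemany("INSERT INTO t (Value) VALUES (?)", [(1,), (2,)])

# ANSI aliasing: the literal 'Goal' is selected and named Type via AS,
# instead of the T-SQL form `SELECT Type='Goal'`.
rows = conn.execute("SELECT 'Goal' AS Type, Value FROM t").fetchall()
print(rows)  # [('Goal', 1), ('Goal', 2)]
```

Every row carries the constant 'Goal' in the Type column, which is the behavior the original T-SQL query produced.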