Hi everyone, I am trying to fetch the metadata of every column from a table, and of every table in the database, under a catalogue. For that I am trying to use the samples catalogue provided by Databricks and get details for the tpch database that provi...
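For a question like this, one route that should work (assuming a Unity Catalog workspace where the Databricks samples catalog is attached and `spark` is available in the notebook) is to list the tables in samples.tpch and describe each one. A minimal sketch:

# a minimal sketch, assuming the "samples" catalog is visible in the workspace
tables = spark.sql("SHOW TABLES IN samples.tpch").collect()
for t in tables:
    print(f"--- {t.tableName} ---")
    # DESCRIBE TABLE returns one row per column: col_name, data_type, comment
    spark.sql(f"DESCRIBE TABLE samples.tpch.{t.tableName}").show(truncate=False)

If your metastore exposes it, the same information can also be queried in one shot from system.information_schema.columns, filtered on table_catalog = 'samples' and table_schema = 'tpch'.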
Hi all, I am working on converting data containing JSON fields with embedded commas into CSV format. I am facing challenges because the commas within the JSON get misinterpreted as column delimiters during the conversion. I tried several methods to modify...
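In case it helps, the Spark CSV writer can be told to quote fields, which normally keeps embedded commas from being read back as delimiters. A sketch, assuming the JSON is held as plain string columns (both paths are placeholders):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.read.json("/tmp/source.json")   # hypothetical input path

(df.write
   .option("header", True)
   .option("quote", '"')    # wrap any field containing the delimiter in quotes
   .option("escape", '"')   # escape embedded double quotes, RFC 4180 style
   .mode("overwrite")
   .csv("/tmp/out_csv"))    # hypothetical output path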
When I try to create a DataFrame like this:

lstOfRange = [
    ['CREDIT_LIMIT_RANGE', Decimal(10000000.010000), Decimal(100000000000000000000000.000000), '>10,000,000', 'G']
]
RangeSchema = StructType([StructField("rangeType", St...
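For what it's worth, this usually comes down to precision: Decimal values built from float literals are inexact, and the default DecimalType(10, 0) cannot hold a 24-digit value. A completed sketch; every field name after rangeType is an assumption, since the original schema is truncated:

from decimal import Decimal
from pyspark.sql.types import StructType, StructField, StringType, DecimalType

# field names below rangeType are hypothetical; only the first is from the post
RangeSchema = StructType([
    StructField("rangeType",  StringType(),       True),
    StructField("lowerLimit", DecimalType(38, 6), True),  # (38, 6) fits the 24-digit value
    StructField("upperLimit", DecimalType(38, 6), True),
    StructField("rangeLabel", StringType(),       True),
    StructField("rangeFlag",  StringType(),       True),
])

lstOfRange = [(
    "CREDIT_LIMIT_RANGE",
    Decimal("10000000.010000"),                   # string constructor keeps the exact scale
    Decimal("100000000000000000000000.000000"),
    ">10,000,000",
    "G",
)]

df = spark.createDataFrame(lstOfRange, RangeSchema)
df.show(truncate=False)

Passing the Decimal constructor a string matters: Decimal(10000000.010000) inherits float rounding error, while Decimal("10000000.010000") is exact and matches the declared scale.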
We are migrating our project from on-premise to Azure. The on-premise database is the SQL Server we are using, and Azure Data Lake Storage Gen2 is where we currently store the data. So far we are validating the record count of each...
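Not sure which part is failing for you, but a count comparison can be scripted end to end. A sketch assuming JDBC connectivity to the on-prem SQL Server and parquet output in ADLS Gen2; the host, table, path, and credentials are all placeholders:

# compare source (SQL Server via JDBC) vs target (parquet in ADLS) row counts
jdbc_url = "jdbc:sqlserver://onprem-host:1433;databaseName=SalesDB"

src_count = (spark.read.format("jdbc")
             .option("url", jdbc_url)
             .option("dbtable", "dbo.Orders")
             .option("user", "<user>")
             .option("password", "<password>")
             .load()
             .count())

tgt_count = spark.read.parquet(
    "abfss://container@storageacct.dfs.core.windows.net/raw/Orders").count()

print(f"source={src_count}, target={tgt_count}, match={src_count == tgt_count}")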
Sounds like a cool option, but here we are leveraging Azure Data Lake Storage as the storage medium, and we write the data directly into the preferred location within ADLS, so that's where things get complicated. Any ideas?
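For reference, the direct write itself is just a matter of targeting the abfss path, assuming the storage account is already authorized (service principal, managed identity, or account key). Container and path names below are placeholders:

# df is whatever DataFrame is being landed; the abfss path is hypothetical
(df.write
   .mode("overwrite")
   .parquet("abfss://container@storageacct.dfs.core.windows.net/curated/orders"))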
Hi, I would suggest the approach recommended by Thomaz Rossito, but maybe you can give it a try with the struct field order swapped, like this: followingschema = StructType([StructField('DA_RATE', DateType(), True),StructField('CURNCY_F', StringTy...
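To make the swapped-order suggestion concrete, here is a completed sketch; only DA_RATE and CURNCY_F appear in the thread, so the remaining fields are hypothetical stand-ins:

from pyspark.sql.types import StructType, StructField, DateType, StringType, DecimalType

followingschema = StructType([
    StructField('DA_RATE',   DateType(),         True),
    StructField('CURNCY_F',  StringType(),       True),
    StructField('CURNCY_T',  StringType(),       True),   # hypothetical field
    StructField('EXCH_RATE', DecimalType(18, 6), True),   # hypothetical field
])

df = spark.read.schema(followingschema).option("header", True).csv("/path/to/rates")
df.printSchema()

The point of the swap is that a user-supplied schema is applied positionally for formats like CSV, so the field order has to match the column order in the file.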
Ohh, thank you for your time. That was working well, but it was extracting the JSON data from the JSON column, which is fine; our real issue is that when we try to write the DataFrame to a CSV, we get values from the AdditionalRequestParameters column t...
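If AdditionalRequestParameters is a struct (or needs re-quoting), one approach that often fixes split values is to serialize it with to_json and force the writer to quote every field. A sketch, with the output path as a placeholder:

from pyspark.sql import functions as F

out = df.withColumn(
    "AdditionalRequestParameters",
    F.to_json(F.col("AdditionalRequestParameters")),  # struct -> one JSON string
)

(out.write
    .option("header", True)
    .option("quoteAll", True)  # quote every field so the JSON commas never split columns
    .option("escape", '"')
    .mode("overwrite")
    .csv("/tmp/final_csv"))    # hypothetical output path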
However much I tried doing that, it did not help. I am using my local system for testing and there are no firewalls or other security blocks. This is the error message that I keep receiving and am unable to fix: com.microsoft.sqlserver.jdbc.SQLServerException: The TCP...
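That exception usually means the driver never reached the port rather than a permissions problem; common causes are the TCP/IP protocol being disabled in SQL Server Configuration Manager, a non-default port, or the SQL Browser service being off for named instances. A sketch of a connection with everything made explicit (server, database, table, and credentials are placeholders):

# explicit host/port plus TLS settings often isolates the failure;
# encrypt/trustServerCertificate matter with recent mssql-jdbc drivers
jdbc_url = ("jdbc:sqlserver://localhost:1433;"
            "databaseName=TestDB;encrypt=true;trustServerCertificate=true")

df = (spark.read.format("jdbc")
      .option("url", jdbc_url)
      .option("dbtable", "dbo.SomeTable")   # hypothetical table
      .option("user", "sa")
      .option("password", "<password>")
      .load())
df.show(5)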