<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Machine learning accuracy depends on execution plans in Machine Learning</title>
    <link>https://community.databricks.com/t5/machine-learning/machine-learning-accuracy-depends-on-execution-plans/m-p/69164#M3272</link>
    <description>&lt;P&gt;That is weird.&lt;BR /&gt;The regression algorithm should just do a prediction on a dataframe.&amp;nbsp; Such a huge difference in accuracy seems very suspicious.&lt;BR /&gt;I would test the algorithm on a reference dataset for which you know the accuracy beforehand.&lt;BR /&gt;Perhaps your transform script in the initial notebook interferes with the model itself, but that seems strange.&lt;/P&gt;</description>
    <pubDate>Thu, 16 May 2024 13:19:48 GMT</pubDate>
    <dc:creator>-werners-</dc:creator>
    <dc:date>2024-05-16T13:19:48Z</dc:date>
    <item>
      <title>Machine learning accuracy depends on execution plans</title>
      <link>https://community.databricks.com/t5/machine-learning/machine-learning-accuracy-depends-on-execution-plans/m-p/69158#M3271</link>
      <description>&lt;P&gt;I'm using Databricks for a machine learning project -- a fairly standard text classification problem, where I want to use the description of an item (e.g.&amp;nbsp;&lt;STRONG&gt;AXELTTNING KOLKERAMIK MM&lt;/STRONG&gt;) to predict which of n product categories the item belongs to (&lt;STRONG&gt;'Bushings', 'Adaptors', 'Sealings'&lt;/STRONG&gt;, etc.). My strategy basically involves transforming the text into sparse vectors using tokenization and the TF-IDF algorithm, and then fitting a model using logistic regression.&lt;/P&gt;&lt;P&gt;On my first attempt, I did everything in a single Databricks notebook -- data cleaning, data transformation, splitting into test/train data sets, and model training. Fitting the model takes several minutes (on a dataset with ~4500 lines), but the model predicts really well, with an accuracy of about 75% (good considering the quality of my data).&lt;/P&gt;&lt;P&gt;Now, to clean up my workspace, I split the code into several notebooks -- one for data cleaning, one for data transformation, one for model fitting and evaluation. Each notebook ends with a&amp;nbsp;&lt;/P&gt;&lt;DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;df.write.mode("overwrite").saveAsTable('tablename')&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;and the next notebook then begins by reading this table. Otherwise, the code is copied line-by-line from the first, big notebook. 
Here's where it gets strange: If I run the notebook that just reads the transformed, cleansed data from a table in the catalog and proceeds with the model training, the training is much faster (less than a minute), but the results are poor (accuracy of ~35%).&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;I can somewhat explain the difference in training time by looking at the execution plans for the two datasets: If I have all my work in a single notebook, the execution plan is rather complex, and maybe that messes with the regression algorithm. On the other hand, if I read the data from a table and proceed directly to the model training, the execution plan is very simple. But that does not explain the huge difference in the performance of the model. I've checked and double-checked that the data sets are the same in the two scenarios, so the difference is not caused by random seeds when splitting data or anything of that sort.&lt;/SPAN&gt;&lt;/DIV&gt;&lt;/DIV&gt;</description>
      <pubDate>Thu, 16 May 2024 12:19:43 GMT</pubDate>
      <guid>https://community.databricks.com/t5/machine-learning/machine-learning-accuracy-depends-on-execution-plans/m-p/69158#M3271</guid>
      <dc:creator>ThomasSvane</dc:creator>
      <dc:date>2024-05-16T12:19:43Z</dc:date>
    </item>
    <item>
      <title>Re: Machine learning accuracy depends on execution plans</title>
      <link>https://community.databricks.com/t5/machine-learning/machine-learning-accuracy-depends-on-execution-plans/m-p/69164#M3272</link>
      <description>&lt;P&gt;That is weird.&lt;BR /&gt;The regression algorithm should just do a prediction on a dataframe.&amp;nbsp; Such a huge difference in accuracy seems very suspicious.&lt;BR /&gt;I would test the algorithm on a reference dataset for which you know the accuracy beforehand.&lt;BR /&gt;Perhaps your transform script in the initial notebook interferes with the model itself, but that seems strange.&lt;/P&gt;</description>
      <pubDate>Thu, 16 May 2024 13:19:48 GMT</pubDate>
      <guid>https://community.databricks.com/t5/machine-learning/machine-learning-accuracy-depends-on-execution-plans/m-p/69164#M3272</guid>
      <dc:creator>-werners-</dc:creator>
      <dc:date>2024-05-16T13:19:48Z</dc:date>
    </item>
  </channel>
</rss>