<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Prakash Hinduja Geneva, Switzerland, How do I fine-tune a large language model (LLM) in Databricks? in Generative AI</title>
    <link>https://community.databricks.com/t5/generative-ai/prakash-hinduja-geneva-switzerland-how-do-i-fine-tune-a-large/m-p/156262#M1798</link>
    <description>&lt;P data-unlink="true"&gt;Thanks, &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/229288"&gt;@anmolhhns&lt;/a&gt;.&amp;nbsp;I tried those options too, but unfortunately they also seem outdated. I created a workspace in East-1 but didn't see the option for Foundation Model Fine-tuning in the Experiments interface. I also tried to run the code in&amp;nbsp;&lt;A href="https://docs.databricks.com/aws/en/large-language-models/foundation-model-training/" target="_blank"&gt;https://docs.databricks.com/aws/en/large-language-models/foundation-model-training/&lt;/A&gt;, but the referenced models are no longer available. Any updated code examples would be very helpful!&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Wed, 06 May 2026 13:38:34 GMT</pubDate>
    <dc:creator>jayshan</dc:creator>
    <dc:date>2026-05-06T13:38:34Z</dc:date>
    <item>
      <title>Prakash Hinduja Geneva, Switzerland, How do I fine-tune a large language model (LLM) in Databricks?</title>
      <link>https://community.databricks.com/t5/generative-ai/prakash-hinduja-geneva-switzerland-how-do-i-fine-tune-a-large/m-p/125413#M1021</link>
      <description>&lt;P&gt;Hello Databricks Community,&lt;/P&gt;&lt;P&gt;I am Prakash Hinduja from Geneva, Switzerland (Swiss), currently exploring fine-tuning large language models (LLMs) in Databricks and would appreciate any guidance or suggestions from those with experience in this area.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards&lt;/P&gt;&lt;P&gt;Prakash Hinduja Geneva, Switzerland (Swiss)&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 16 Jul 2025 10:01:22 GMT</pubDate>
      <guid>https://community.databricks.com/t5/generative-ai/prakash-hinduja-geneva-switzerland-how-do-i-fine-tune-a-large/m-p/125413#M1021</guid>
      <dc:creator>prakashhinduja</dc:creator>
      <dc:date>2025-07-16T10:01:22Z</dc:date>
    </item>
    <item>
      <title>Re: Prakash Hinduja Geneva, Switzerland, How do I fine-tune a large language model (LLM) in Databricks?</title>
      <link>https://community.databricks.com/t5/generative-ai/prakash-hinduja-geneva-switzerland-how-do-i-fine-tune-a-large/m-p/125417#M1022</link>
      <description>&lt;P&gt;Hello Prakash,&amp;nbsp;&lt;/P&gt;&lt;P&gt;You can start from here:&amp;nbsp;&lt;A href="https://www.databricks.com/blog/fine-tuning-large-language-models-hugging-face-and-deepspeed" target="_blank"&gt;https://www.databricks.com/blog/fine-tuning-large-language-models-hugging-face-and-deepspeed&lt;/A&gt;&lt;/P&gt;&lt;P&gt;There are also ongoing Databricks courses.&amp;nbsp;&lt;/P&gt;&lt;P&gt;You can register at&amp;nbsp;&lt;A href="https://uplimit.com/course/databricks-genai-with-databricks" target="_blank"&gt;https://uplimit.com/course/databricks-genai-with-databricks&lt;/A&gt;.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 16 Jul 2025 10:45:54 GMT</pubDate>
      <guid>https://community.databricks.com/t5/generative-ai/prakash-hinduja-geneva-switzerland-how-do-i-fine-tune-a-large/m-p/125417#M1022</guid>
      <dc:creator>Khaja_Zaffer</dc:creator>
      <dc:date>2025-07-16T10:45:54Z</dc:date>
    </item>
    <item>
      <title>Re: Prakash Hinduja Geneva, Switzerland, How do I fine-tune a large language model (LLM) in Databricks?</title>
      <link>https://community.databricks.com/t5/generative-ai/prakash-hinduja-geneva-switzerland-how-do-i-fine-tune-a-large/m-p/125450#M1024</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/173383"&gt;@prakashhinduja&lt;/a&gt;,&amp;nbsp;&lt;/P&gt;&lt;P&gt;Could you acknowledge the solution provided? You keep asking the same questions. &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 16 Jul 2025 14:26:18 GMT</pubDate>
      <guid>https://community.databricks.com/t5/generative-ai/prakash-hinduja-geneva-switzerland-how-do-i-fine-tune-a-large/m-p/125450#M1024</guid>
      <dc:creator>Khaja_Zaffer</dc:creator>
      <dc:date>2025-07-16T14:26:18Z</dc:date>
    </item>
    <item>
      <title>Re: Prakash Hinduja Geneva, Switzerland, How do I fine-tune a large language model (LLM) in Databricks?</title>
      <link>https://community.databricks.com/t5/generative-ai/prakash-hinduja-geneva-switzerland-how-do-i-fine-tune-a-large/m-p/156206#M1796</link>
      <description>&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/173840"&gt;@Khaja_Zaffer&lt;/a&gt;&amp;nbsp;Thank you for providing those links, but the materials at those links are outdated and difficult to replicate. It gives me the impression that LLM fine-tuning is no longer a priority for Databricks Mosaic AI. Can any expert confirm that?&lt;/P&gt;</description>
      <pubDate>Tue, 05 May 2026 22:45:44 GMT</pubDate>
      <guid>https://community.databricks.com/t5/generative-ai/prakash-hinduja-geneva-switzerland-how-do-i-fine-tune-a-large/m-p/156206#M1796</guid>
      <dc:creator>jayshan</dc:creator>
      <dc:date>2026-05-05T22:45:44Z</dc:date>
    </item>
    <item>
      <title>Re: Prakash Hinduja Geneva, Switzerland, How do I fine-tune a large language model (LLM) in Databricks?</title>
      <link>https://community.databricks.com/t5/generative-ai/prakash-hinduja-geneva-switzerland-how-do-i-fine-tune-a-large/m-p/156211#M1797</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/217522"&gt;@jayshan&lt;/a&gt;, you're right, that material is outdated now. The more current path is Foundation Model Fine-tuning, which is part of &lt;STRONG&gt;Mosaic AI Model Training&lt;/STRONG&gt;. You can run it from the Databricks UI or with the databricks_genai SDK, point it at training data in Unity Catalog, pick a task type like CHAT_COMPLETION, INSTRUCTION_FINETUNE, or CONTINUED_PRETRAIN, and it will register the fine-tuned model back to Unity Catalog for serving. Just note it's still in Public Preview and limited to certain regions. Here is the official doc: &lt;A href="https://docs.databricks.com/aws/en/large-language-models/foundation-model-training/" target="_blank" rel="noopener"&gt;https://docs.databricks.com/aws/en/large-language-models/foundation-model-training/&lt;/A&gt;&lt;BR /&gt;If you need more control than the managed API offers, you can also spin up a GPU cluster on the Databricks Runtime for Machine Learning and use tools like LoRA (via peft), TRL, DeepSpeed, Unsloth, or Axolotl for custom workflows.&lt;/P&gt;
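&lt;P&gt;For the custom route, a minimal LoRA sketch with peft and trl could look like the following. The model id, paths, and hyperparameters here are placeholders, and the trl API changes between versions, so treat this as a starting point rather than a recipe:&lt;/P&gt;&lt;PRE&gt;%pip install peft trl transformers datasets accelerate
dbutils.library.restartPython()&lt;/PRE&gt;&lt;PRE&gt;from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Chat-format JSONL ({"messages": [...]} rows), e.g. in a Unity Catalog Volume
dataset = load_dataset(
    "json",
    data_files="/Volumes/&amp;lt;your_catalog&amp;gt;/&amp;lt;schema&amp;gt;/&amp;lt;volume&amp;gt;/train.jsonl",
    split="train",
)

# LoRA: train small low-rank adapter matrices instead of all model weights
peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model="meta-llama/Llama-3.2-3B-Instruct",  # any causal LM you have access to
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(output_dir="/local_disk0/lora_out", num_train_epochs=1),
)
trainer.train()&lt;/PRE&gt;</description>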
      <pubDate>Wed, 06 May 2026 05:48:20 GMT</pubDate>
      <guid>https://community.databricks.com/t5/generative-ai/prakash-hinduja-geneva-switzerland-how-do-i-fine-tune-a-large/m-p/156211#M1797</guid>
      <dc:creator>anmolhhns</dc:creator>
      <dc:date>2026-05-06T05:48:20Z</dc:date>
    </item>
    <item>
      <title>Re: Prakash Hinduja Geneva, Switzerland, How do I fine-tune a large language model (LLM) in Databricks?</title>
      <link>https://community.databricks.com/t5/generative-ai/prakash-hinduja-geneva-switzerland-how-do-i-fine-tune-a-large/m-p/156262#M1798</link>
      <description>&lt;P data-unlink="true"&gt;Thanks, &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/229288"&gt;@anmolhhns&lt;/a&gt;.&amp;nbsp;I tried those options too, but unfortunately they also seem outdated. I created a workspace in East-1 but didn't see the option for Foundation Model Fine-tuning in the Experiments interface. I also tried to run the code in&amp;nbsp;&lt;A href="https://docs.databricks.com/aws/en/large-language-models/foundation-model-training/" target="_blank"&gt;https://docs.databricks.com/aws/en/large-language-models/foundation-model-training/&lt;/A&gt;, but the referenced models are no longer available. Any updated code examples would be very helpful!&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 06 May 2026 13:38:34 GMT</pubDate>
      <guid>https://community.databricks.com/t5/generative-ai/prakash-hinduja-geneva-switzerland-how-do-i-fine-tune-a-large/m-p/156262#M1798</guid>
      <dc:creator>jayshan</dc:creator>
      <dc:date>2026-05-06T13:38:34Z</dc:date>
    </item>
    <item>
      <title>Re: Prakash Hinduja Geneva, Switzerland, How do I fine-tune a large language model (LLM) in Databricks?</title>
      <link>https://community.databricks.com/t5/generative-ai/prakash-hinduja-geneva-switzerland-how-do-i-fine-tune-a-large/m-p/156272#M1799</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/217522"&gt;@jayshan&lt;/a&gt;, you're right that it doesn't show up in the Experiments tab. It's accessed through the databricks_genai SDK directly from a notebook. The Experiments tab will only show your run &lt;EM&gt;after&lt;/EM&gt; you launch one via the SDK.&lt;/P&gt;&lt;P&gt;I just ran it in my workspace and it works; here's a quick setup you can try:&lt;/P&gt;&lt;PRE&gt;%pip install databricks_genai
dbutils.library.restartPython()&lt;/PRE&gt;&lt;PRE&gt;from databricks.model_training import foundation_model as fm

# 1. First, check what models are actually available in your region/workspace
for m in fm.get_models():
    print(m.name)&lt;/PRE&gt;&lt;P&gt;In my workspace this returned: Llama 3.1 8B / 8B-Instruct / 70B / 70B-Instruct, Llama 3.2 1B / 1B-Instruct / 3B / 3B-Instruct, and Llama 3.3 70B-Instruct. So meta-llama/Llama-3.2-3B-Instruct from the docs example is still valid. If you got "model not available" earlier, it might be a region issue or the SDK version.&lt;/P&gt;&lt;PRE&gt;# 2. Launch a small training run (point train_data_path to a JSONL in a Unity Catalog Volume)
run = fm.create(
    model="meta-llama/Llama-3.2-3B-Instruct",
    train_data_path="/Volumes/&amp;lt;your_catalog&amp;gt;/&amp;lt;schema&amp;gt;/&amp;lt;volume&amp;gt;/train.jsonl",
    task_type="CHAT_COMPLETION",
    register_to="&amp;lt;your_catalog&amp;gt;.&amp;lt;schema&amp;gt;.&amp;lt;model_name&amp;gt;",
    training_duration="1ep",
    learning_rate="5e-7",
)

print(run.name, run.status)&lt;/PRE&gt;&lt;P&gt;The training data should be a JSONL file with chat-format rows like:&lt;/P&gt;&lt;PRE&gt;{"messages": [{"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]}&lt;/PRE&gt;&lt;P&gt;Once you call fm.create(), the run shows up in the Experiments tab as an MLflow run. If you still hit "model not available" after running get_models(), share the exact error and we can dig into it.&lt;/P&gt;
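&lt;P&gt;To keep an eye on the run after launch, the SDK also has monitoring helpers. I'm going from the doc page here, so double-check the names against the databricks_genai version you installed:&lt;/P&gt;&lt;PRE&gt;# 3. Monitor the run: lifecycle events (pending, running, completed, failed)
display(fm.get_events(run))

# List the fine-tuning runs in the workspace
display(fm.list())

# Cancel the run if you launched it by mistake
# fm.cancel(run)&lt;/PRE&gt;</description>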
      <pubDate>Wed, 06 May 2026 14:57:10 GMT</pubDate>
      <guid>https://community.databricks.com/t5/generative-ai/prakash-hinduja-geneva-switzerland-how-do-i-fine-tune-a-large/m-p/156272#M1799</guid>
      <dc:creator>anmolhhns</dc:creator>
      <dc:date>2026-05-06T14:57:10Z</dc:date>
    </item>
  </channel>
</rss>

