<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Solution Accelerator Series | Toxicity Detection in Gaming in Announcements</title>
    <link>https://community.databricks.com/t5/announcements/solution-accelerator-series-toxicity-detection-in-gaming/m-p/148044#M597</link>
    <description>&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/202433"&gt;@Om_Jha&lt;/a&gt;&amp;nbsp;,&amp;nbsp;&lt;/P&gt;
&lt;P class="p1"&gt;This is a great example of how applied AI can directly improve user experience, not just optimize metrics. I really like how this accelerator connects real-time NLP, streaming, and ML lifecycle management into a single, practical lakehouse workflow. The focus on moderation teams and community health makes it feel grounded and immediately actionable, especially for gaming platforms dealing with scale and toxicity in live environments.&lt;/P&gt;
&lt;P class="p1"&gt;Nicely done.&lt;/P&gt;
&lt;P class="p1"&gt;Cheers, Lou.&lt;/P&gt;</description>
    <pubDate>Wed, 11 Feb 2026 12:45:34 GMT</pubDate>
    <dc:creator>Louis_Frolio</dc:creator>
    <dc:date>2026-02-11T12:45:34Z</dc:date>
    <item>
      <title>Solution Accelerator Series | Toxicity Detection in Gaming</title>
      <link>https://community.databricks.com/t5/announcements/solution-accelerator-series-toxicity-detection-in-gaming/m-p/148028#M596</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Build Healthier Communities With Real-Time AI!&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;This Solution Accelerator helps you detect toxic in-game chat in real time so you can protect players, reduce churn, and keep your gaming communities engaged and healthy. It shows you how to combine a lakehouse architecture with NLP to ingest and analyze gamer data, flag toxic messages, and support your moderation teams with scalable, automated workflows.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Key highlights&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;SPAN&gt;Pre-built Databricks notebook with code, sample data and step-by-step guidance to get started quickly&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;SPAN&gt;Real-time detection of toxic comments in in-game chat using multi-label NLP models&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;SPAN&gt;Lakehouse-based architecture to unify chat, gameplay and other gamer data (streams, files, voice and more)&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;SPAN&gt;Built-in ML pipeline to train and track toxicity models and a streaming pipeline for real-time inference&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;SPAN&gt;Designed to plug into existing community moderation processes and tools to improve player experience and retention&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN&gt;Ready to tackle toxicity in your games? Import the &lt;/SPAN&gt;&lt;A href="https://www.databricks.com/solutions/accelerators/toxicity-detection-for-gaming?itm_source=www&amp;amp;itm_category=solutions&amp;amp;itm_page=accelerators&amp;amp;itm_location=body&amp;amp;itm_component=general-asset-card&amp;amp;itm_offer=toxicity-detection-for-gaming" target="_blank"&gt;&lt;SPAN&gt;Toxicity Detection in Gaming Solution Accelerator&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt; into your Databricks workspace and start building real-time toxicity detection and moderation workflows today.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;You can also refer to this &lt;/SPAN&gt;&lt;A href="https://www.databricks.com/blog/2021/06/16/solution-accelerator-toxicity-detection-in-gaming.html" target="_blank"&gt;&lt;SPAN&gt;article&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt; for a complete overview of Toxicity Detection in Gaming.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 11 Feb 2026 10:54:45 GMT</pubDate>
      <guid>https://community.databricks.com/t5/announcements/solution-accelerator-series-toxicity-detection-in-gaming/m-p/148028#M596</guid>
      <dc:creator>Om_Jha</dc:creator>
      <dc:date>2026-02-11T10:54:45Z</dc:date>
    </item>
    <item>
      <title>Re: Solution Accelerator Series | Toxicity Detection in Gaming</title>
      <link>https://community.databricks.com/t5/announcements/solution-accelerator-series-toxicity-detection-in-gaming/m-p/148044#M597</link>
      <description>&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/202433"&gt;@Om_Jha&lt;/a&gt;&amp;nbsp;,&amp;nbsp;&lt;/P&gt;
&lt;P class="p1"&gt;This is a great example of how applied AI can directly improve user experience, not just optimize metrics. I really like how this accelerator connects real-time NLP, streaming, and ML lifecycle management into a single, practical lakehouse workflow. The focus on moderation teams and community health makes it feel grounded and immediately actionable, especially for gaming platforms dealing with scale and toxicity in live environments.&lt;/P&gt;
&lt;P class="p1"&gt;Nicely done.&lt;/P&gt;
&lt;P class="p1"&gt;Cheers, Lou.&lt;/P&gt;</description>
      <pubDate>Wed, 11 Feb 2026 12:45:34 GMT</pubDate>
      <guid>https://community.databricks.com/t5/announcements/solution-accelerator-series-toxicity-detection-in-gaming/m-p/148044#M597</guid>
      <dc:creator>Louis_Frolio</dc:creator>
      <dc:date>2026-02-11T12:45:34Z</dc:date>
    </item>
  </channel>
</rss>