<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title>dodistyo</title><link>https://dodistyo.github.io/</link><description>This is my space</description><generator>Hugo -- gohugo.io</generator><language>en</language><managingEditor>dodipras27@gmail.com (Dodi Prasetyo)</managingEditor><webMaster>dodipras27@gmail.com (Dodi Prasetyo)</webMaster><copyright>This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.</copyright><lastBuildDate>Sun, 29 Mar 2026 16:51:00 +0700</lastBuildDate><atom:link href="https://dodistyo.github.io/index.xml" rel="self" type="application/rss+xml"/><item><title>Qwen 3.5: Open Weights Closing the Gap to Proprietary Models</title><link>https://dodistyo.github.io/posts/qwen-3.5-open-weights/</link><pubDate>Sun, 29 Mar 2026 16:51:00 +0700</pubDate><author>Dodi Prasetyo</author><guid>https://dodistyo.github.io/posts/qwen-3.5-open-weights/</guid><description><![CDATA[

<p>The open-weight LLM scene has been moving fast lately — but most of the noise is just bigger parameter counts chasing diminishing returns. What&rsquo;s actually interesting right now isn&rsquo;t how massive a model can get, but how much capability can be packed into something that runs on consumer hardware.</p>
<p>Enter <strong>Qwen 3.5</strong>, which Alibaba released in February with two variants designed for exactly this moment: the <strong>27B</strong> dense model and the <strong>35B-A3B</strong> MoE. These aren&rsquo;t trying to be GPT-5 replacements. They&rsquo;re asking a different question entirely — what if you could run frontier-level reasoning locally without needing an API key or worrying about token costs?</p>]]></description></item><item><title>Running Open Weight Models On A Single Consumer-Grade GPU</title><link>https://dodistyo.github.io/posts/open-model-on-local/</link><pubDate>Sun, 19 Oct 2025 00:00:00 +0700</pubDate><author>Dodi Prasetyo</author><guid>https://dodistyo.github.io/posts/open-model-on-local/</guid><description><![CDATA[<h2 id="why-open-models">Why Open Models?</h2>
<p>For years, the biggest language and vision systems were locked behind corporate APIs — from OpenAI, Anthropic, Google, and others.</p>
<p>Then DeepSeek came along: a relatively unknown AI research lab from China released an open-source model that quickly became the talk of the industry, making it one of the pioneers of the open-model space. On the metrics that matter (capability, cost, openness), DeepSeek opened the way for open-weight models.</p>]]></description></item><item><title>Take Your Dev Setup Anywhere with CDE</title><link>https://dodistyo.github.io/posts/cloud-development-environment/</link><pubDate>Sun, 20 Jul 2025 19:30:00 +0700</pubDate><author>Dodi Prasetyo</author><guid>https://dodistyo.github.io/posts/cloud-development-environment/</guid><description><![CDATA[<p>Do you enjoy setting up a development environment each time you switch devices?</p>
<p>Well, I don&rsquo;t&hellip;</p>
<p>Setting up a workspace can be tedious, especially when you get a new device or switch between devices often.</p>]]></description></item><item><title>Platform Engineering And Cyber Security Stories</title><link>https://dodistyo.github.io/posts/platform-and-security/</link><pubDate>Fri, 20 Jun 2025 19:37:00 +0700</pubDate><author>Dodi</author><guid>https://dodistyo.github.io/posts/platform-and-security/</guid><description>&lt;p>Alright, so here are the takes from a guy who&amp;rsquo;s been knee-deep in platform and security: building stuff, breaking stuff, securing it, then unbreaking it again. You know, the usual.&lt;/p></description></item><item><title>Kubernetes Gateway API</title><link>https://dodistyo.github.io/posts/kubernetes-gateway-api/</link><pubDate>Sat, 05 Jul 2025 00:00:00 +0700</pubDate><author>Dodi</author><guid>https://dodistyo.github.io/posts/kubernetes-gateway-api/</guid><description>&lt;p>Do you remember the good old days, when you were trying to figure out how to make your applications accessible from outside the Kubernetes cluster?&lt;/p></description></item><item><title>Qwen 3.5: Open Weights Closing the Gap to Proprietary Models</title><link>https://dodistyo.github.io/posts/qwen-3.5-medium-open-weights/</link><pubDate>Sat, 28 Mar 2026 09:00:00 +0700</pubDate><author>Dodi Prasetyo</author><guid>https://dodistyo.github.io/posts/qwen-3.5-medium-open-weights/</guid><description><![CDATA[<p>It&rsquo;s been a while since my last post about open-weight models — and honestly, the pace of improvement has been wild. Every few months, something new drops that makes you question whether proprietary models are still worth the hype.</p>
<p>Case in point: <strong>Devstral Small 2</strong> dropped on Dec 22, 2025 with a solid <strong>68%</strong> SWE-Bench score. Impressive for a ~24B model running on consumer hardware. But then just <strong>57 days later</strong>, on Feb 17, 2026, Alibaba released <strong>Qwen 3.5</strong> — and the open-weight game changed again.</p>]]></description></item></channel></rss>