<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Season-3 on GPUburnout | Jun Park</title>
    <link>https://gpuburnout.com/tags/season-3/</link>
    <description>Recent content in Season-3 on GPUburnout | Jun Park</description>
    <image>
      <title>GPUburnout | Jun Park</title>
      <url>https://gpuburnout.com/images/og-default.png</url>
      <link>https://gpuburnout.com/tags/season-3/</link>
    </image>
    <generator>Hugo -- 0.155.2</generator>
    <language>en-us</language>
    <lastBuildDate>Sat, 21 Mar 2026 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://gpuburnout.com/tags/season-3/index.xml" rel="self" type="application/rss+xml"/>
    <item>
      <title>Nine Experiments, Nine Funerals</title>
      <link>https://gpuburnout.com/posts/s3-ch3-garbage-survived-finetuning/</link>
      <pubDate>Sat, 21 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://gpuburnout.com/posts/s3-ch3-garbage-survived-finetuning/</guid>
      <description>A controlled experiment on why post-training alignment can&#39;t fix pretraining contamination - and what the data proves.</description>
    </item>
    <item>
      <title>My Model&#39;s Vocabulary Came from Stack Overflow at 3am</title>
      <link>https://gpuburnout.com/posts/s3-ch2-garbage-where-it-came-from/</link>
      <pubDate>Wed, 18 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://gpuburnout.com/posts/s3-ch2-garbage-where-it-came-from/</guid>
      <description>In which a forensic investigation into nonsense tokens reveals a problem that no amount of fine-tuning can fix.</description>
    </item>
    <item>
      <title>Teaching the 1B to Talk</title>
      <link>https://gpuburnout.com/posts/s3-ch1-teaching-1b-to-talk/</link>
      <pubDate>Wed, 18 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://gpuburnout.com/posts/s3-ch1-teaching-1b-to-talk/</guid>
      <description>In which I try to make a language model useful, discover something deeply wrong, and realize I&#39;ve been asking the wrong question.</description>
    </item>
  </channel>
</rss>