Super Rune

Rust, Copilot, and the Power of Small Models


I have a few hobby projects that I tinker with in my spare time. One of them is a Rust-based RSS/Atom feed reader. It scans feeds, fetches content, and generates static HTML pages to create a simple website. It’s not revolutionary, but it’s a project I enjoy working on, and it’s given me a chance to learn and experiment with Rust.

Introduction

Recently, I decided to improve the project's code quality and add some new features. I wanted to make it easier to add new feeds, improve error handling, and clean up some of the messy code I'd written when I was less experienced. To help with this, I turned to GitHub Copilot's free tier, which offers smaller models such as OpenAI's GPT-5 Mini and Anthropic's Claude Haiku 4.5.

The Project: A Rust Feed Reader

The project itself is fairly simple. It takes RSS or Atom feed URLs as input and generates static HTML pages for a basic website. The codebase, however, was a bit of a mess. I had used a lot of unwrap() calls, which can cause panics if something goes wrong, and the error handling was minimal. It was the kind of code you write when you’re still learning and just want something to work, without worrying too much about robustness.

One of the main improvements I wanted to make was to streamline the process of adding new feeds. Ideally, I wanted to be able to add a new feed from the command line by simply providing the feed URL and some tags. The tool would then detect the feed type, fetch the title automatically, and add the feed to the reader. If the title couldn't be fetched, the feed wouldn't be added; after all, a feed without a title isn't very useful.
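To illustrate the idea, here is a minimal sketch of that validation logic: detect the feed type from the raw XML and refuse to add a feed whose title can't be found. The names (FeedKind, detect_kind, extract_title, add_feed) are hypothetical, not the project's actual API, and the string matching stands in for a real feed parser.

```rust
// Hypothetical sketch of the "add feed" validation flow. In practice a
// proper parser crate would handle the XML; the naive string matching
// here is only to show the decision logic.

#[derive(Debug, PartialEq)]
enum FeedKind {
    Rss,
    Atom,
}

// Guess the feed type from the root element of the document.
fn detect_kind(xml: &str) -> Option<FeedKind> {
    if xml.contains("<rss") {
        Some(FeedKind::Rss)
    } else if xml.contains("<feed") {
        Some(FeedKind::Atom)
    } else {
        None
    }
}

// Pull out the first <title> element; None if missing or empty.
fn extract_title(xml: &str) -> Option<String> {
    let start = xml.find("<title>")? + "<title>".len();
    let end = xml[start..].find("</title>")? + start;
    let title = xml[start..end].trim();
    if title.is_empty() {
        None
    } else {
        Some(title.to_string())
    }
}

// Only add the feed when both the kind and a non-empty title are known.
fn add_feed(xml: &str, tags: &[&str]) -> Result<String, String> {
    let kind = detect_kind(xml).ok_or_else(|| "unknown feed type".to_string())?;
    let title =
        extract_title(xml).ok_or_else(|| "feed has no title; refusing to add".to_string())?;
    Ok(format!("added {:?} feed '{}' with tags {:?}", kind, title, tags))
}
```

The point of returning a Result here is that a feed without a title is rejected with an error the command line can print, instead of silently producing a useless entry.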

Copilot Free Mode

Since I don’t have a paid license for GitHub Copilot, I’m limited to the free tier, which means working with smaller models like GPT-5 Mini. I was curious to see how well it would perform, so I set up an agent to help me with the changes.

Setting Up the Agent

First, I created an agent file to describe what I wanted the agent to do. This file outlined the tasks and goals, such as adding new feeds and improving error handling. I also created a skills file to guide the agent on how to implement these changes. The skills file included specific instructions and examples to help the agent understand what I was looking for.

I used plan mode to break down the tasks. For planning, I used Haiku, and for implementation, I used GPT-5 Mini. The idea was to create a clear roadmap for the agent to follow, which would help it stay on track and make fewer mistakes.

Adding a New Feed

The first task was to implement the "add new feed" feature. The agent did a decent job, but there were a few hiccups along the way. For example, there was a runtime error that the agent didn't catch at first. It also added feeds even when the title couldn't be fetched, so I had it fix this by requiring that a title be present before adding a feed.

Cleaning Up the Code

Next, I ran cargo clippy to identify all the unwrap() calls in the codebase. My goal was to replace these with proper error handling to prevent panics and make the code more robust. The agent started by tackling the smaller files, replacing unwrap() calls with functions that return a Result. This way, errors are propagated gracefully, and the program can handle failures without crashing.
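To make the transformation concrete, here is a small before-and-after sketch of the pattern, assuming a config value being parsed (the function names are illustrative, not from the project): unwrap() panics on bad input, while a Result plus the ? operator hands the error back to the caller.

```rust
use std::num::ParseIntError;

// Before: panics the whole program if the input isn't a valid number.
#[allow(dead_code)]
fn parse_port_unwrap(s: &str) -> u16 {
    s.parse().unwrap()
}

// After: the `?` operator propagates the parse error to the caller,
// which can log it, skip the feed, or fall back to a default.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    let port: u16 = s.parse()?;
    Ok(port)
}
```

Once a function returns a Result, its callers usually have to change too, which is why this kind of cleanup tends to ripple outward until every function in the chain returns a Result.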

The agent then moved on to a larger file that had many unwrap() calls—likely one of the first files I wrote when I was still learning Rust. It replaced these calls with proper error handling, ensuring that every function now returns a Result.

To verify the changes, I used git diff to review what the agent had done. If the code compiled, I knew it was on the right track. I also ran cargo clippy again to catch any remaining issues, and the agent fixed those as well.

Small Models Can Have Big Impact

The Good

One of the biggest surprises was how effective GPT-5 Mini was for my project. It’s not as fast or sophisticated as larger models like Claude Opus 4.6, but it got the job done. For a small codebase like mine, a smaller model was more than enough to make meaningful improvements.

The free tier was more than sufficient for my needs. I didn’t need to pay for a subscription, and the token limitations weren’t a problem since I could always wait if I ran out.

The Annoyances

Of course, there were some downsides. The model is slow, so if you’re in a hurry, this might not be the best tool for you. Additionally, the “thinking out loud” process isn’t as polished as with larger models. The agent sometimes made decisions that didn’t entirely make sense, but with a bit of guidance, it was able to correct course.

Key Takeaways

As a Hobbyist

If you’re working on a small project, don’t underestimate the power of free tools and small models. They might not be as flashy or powerful as their larger counterparts, but they can still make a big difference. A well-structured agent and skills file can guide the model effectively, even if it’s not the most advanced AI out there.

Plan mode is also incredibly useful. It allows you to set the direction and let the model work without constant supervision, which is great if you’re juggling multiple tasks or just want to let the agent do its thing.

Final Thoughts

If you’re a hobbyist working on a small project, don’t be afraid to give free tools and small models a try.


Written by human, not by AI

Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.