My coworkers really like AI-powered code review tools and it seems that every time I make a pull request in one of their repos I learn about yet another AI code review SaaS product. Given that there are so many of them, I decided to see how easy it would be to develop my own AI-powered code review bot that targets GitHub repositories. I managed to hack out the core of it in a single afternoon using a model that runs on my desk. I've ended up with a little tool I call reviewbot that takes GitHub pull request information and submits code reviews in response.
reviewbot is powered by a DGX Spark, llama.cpp, and OpenAI's GPT-OSS 120b. The model runs on a machine on my desk that pulls less power doing AI inference than my gaming tower does running fairly lightweight 3D games. In testing, nearly all runs of reviewbot finish in under two minutes, even though the DGX Spark generates only 60 tokens per second.
reviewbot is about 350 lines of Go that just feeds pull request information into the context window of the model and provides a few tools for actions like "leave pull request review" and "read contents of file". I'm considering adding other actions like "read messages in thread" or "read contents of issue", but I haven't needed them yet.
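The tool definitions are nothing fancy. Here's a sketch of roughly what they look like in Go, using the OpenAI-style function-calling format that llama.cpp's server speaks; the names and schemas below are simplified stand-ins, not reviewbot's exact definitions:
```go
package reviewbot

// Tool definitions in the OpenAI-style function-calling format that
// llama.cpp's OpenAI-compatible server accepts. These are simplified
// stand-ins, not reviewbot's exact schemas.
type Tool struct {
	Type     string   `json:"type"` // always "function"
	Function Function `json:"function"`
}

type Function struct {
	Name        string         `json:"name"`
	Description string         `json:"description"`
	Parameters  map[string]any `json:"parameters"` // a JSON Schema object
}

var tools = []Tool{
	{Type: "function", Function: Function{
		Name:        "submit_review",
		Description: "Submit the final review verdict and end the run.",
		Parameters: map[string]any{
			"type": "object",
			"properties": map[string]any{
				"verdict": map[string]any{"type": "string", "enum": []string{"approve", "request_changes", "comment"}},
				"body":    map[string]any{"type": "string"},
			},
			"required": []string{"verdict", "body"},
		},
	}},
	{Type: "function", Function: Function{
		Name:        "read_file",
		Description: "Read the contents of a file in the repository.",
		Parameters: map[string]any{
			"type": "object",
			"properties": map[string]any{
				"path": map[string]any{"type": "string"},
			},
			"required": []string{"path"},
		},
	}},
}
```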
To make my life easier, I distribute it as a Docker image that gets run in GitHub Actions whenever a pull request comment includes the magic phrase /reviewbot.
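The GitHub Actions side is a dumb trigger: the workflow fires on comment events and the bot bails out unless the magic phrase is there. A minimal sketch of that check, assuming the issue_comment payload that Actions writes to the file named by GITHUB_EVENT_PATH:
```go
package reviewbot

import (
	"encoding/json"
	"os"
	"strings"
)

// shouldRun reads the webhook payload that GitHub Actions writes to the
// path in $GITHUB_EVENT_PATH and checks the comment body for the magic
// phrase. This is a sketch; the real payload has many more fields.
func shouldRun() (bool, error) {
	raw, err := os.ReadFile(os.Getenv("GITHUB_EVENT_PATH"))
	if err != nil {
		return false, err
	}
	var ev struct {
		Comment struct {
			Body string `json:"body"`
		} `json:"comment"`
	}
	if err := json.Unmarshal(raw, &ev); err != nil {
		return false, err
	}
	return strings.Contains(ev.Comment.Body, "/reviewbot"), nil
}
```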
The main reason I made reviewbot is that I couldn't find anything like it that let you specify the combination of:
- Your own AI model name
- Your own AI model provider URL
- Your own AI model provider API token
I'm fairly sure that there are thousands of similar AI-powered tools on the market that I can't find because Google is a broken tool, but this one is mine.
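Concretely, those three knobs boil down to a tiny config struct read from the environment and pointed at whatever OpenAI-compatible endpoint you want (in my case, llama.cpp's server on the Spark). The environment variable names here are made up for the sketch:
```go
package reviewbot

import "os"

// Config holds the three things I couldn't find anywhere else: the model
// name, the provider's base URL, and the provider's API token. The
// environment variable names are illustrative.
type Config struct {
	Model   string // e.g. "gpt-oss-120b"
	BaseURL string // e.g. "http://spark.local:8080/v1" for llama.cpp's server
	Token   string // API token, if the provider wants one
}

func configFromEnv() Config {
	return Config{
		Model:   os.Getenv("REVIEWBOT_MODEL"),
		BaseURL: os.Getenv("REVIEWBOT_BASE_URL"),
		Token:   os.Getenv("REVIEWBOT_API_TOKEN"),
	}
}
```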
How it works
When reviewbot reviews a pull request, it assembles an AI model prompt like this:
```
Pull request info:
<pr>
<title>Pull request title</title>
<author>GitHub username of pull request author</author>
<body>
Text body of the pull request
</body>
</pr>
Commits:
<commits>
<commit>
<author>Xe</author>
<message>
chore: minor formatting and cleanup fixes
- Format .mcp.json with prettier
- Minor whitespace cleanup
Assisted-by: GLM 4.7 via Claude Code
Reviewbot-request: yes
Signed-off-by: Xe Iaso <me@xeiaso.net>
</message>
</commit>
</commits>
Files changed:
<files>
<file>
<name>.mcp.json</name>
<status>modified</status>
<patch>
@@ -3,11 +3,8 @@
"python": {
"type": "stdio",
"command": "go",
- "args": [
- "run",
- "./cmd/python-wasm-mcp"
- ],
+ "args": ["run", "./cmd/python-wasm-mcp"],
"env": {}
}
}
-}
\ No newline at end of file
+}
</patch>
</file>
</files>
Agent information:
<agentInfo>
[contents of AGENTS.d in the repository]
</agentInfo>
```
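Assembling that prompt is the most boring part of the code: it's string templating over GitHub API responses. Here's a rough sketch with text/template, using a trimmed-down PullRequest type instead of reviewbot's real one:
```go
package reviewbot

import (
	"strings"
	"text/template"
)

// A trimmed-down view of the pull request; the real prompt also carries
// commits, file patches, and the agent instructions.
type PullRequest struct {
	Title, Author, Body string
}

var promptTmpl = template.Must(template.New("prompt").Parse(`Pull request info:
<pr>
<title>{{.Title}}</title>
<author>{{.Author}}</author>
<body>
{{.Body}}
</body>
</pr>
`))

func renderPrompt(pr PullRequest) (string, error) {
	var sb strings.Builder
	if err := promptTmpl.Execute(&sb, pr); err != nil {
		return "", err
	}
	return sb.String(), nil
}
```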
The AI model can return one of three results:
- Definite approval via the `submit_review` tool that approves the changes with a summary of the changes made to the code.
- Definite rejection via the `submit_review` tool that rejects the changes with a summary of the reason why they're being rejected.
- Comments without approving or rejecting the code.
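When the model calls `submit_review`, the verdict maps straight onto GitHub's pull request review API, which takes an event of APPROVE, REQUEST_CHANGES, or COMMENT. A sketch of that mapping, using the verdict strings from the tool schema above:
```go
package reviewbot

// reviewEvent maps the model's verdict onto the event field of GitHub's
// "create a review for a pull request" API. Unknown verdicts fall back
// to a plain comment rather than accidentally approving anything.
func reviewEvent(verdict string) string {
	switch verdict {
	case "approve":
		return "APPROVE"
	case "request_changes":
		return "REQUEST_CHANGES"
	default:
		return "COMMENT"
	}
}
```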
The core of reviewbot is the "AI agent loop", a loop that works like this (there's a Go sketch of it after the list):
- Collect information to feed into the AI model
- Submit information to AI model
- If the AI model runs the `submit_review` tool, publish the results and exit.
- If the AI model runs any other tool, collect the information it's requesting and add it to the list of things to submit to the AI model in the next loop.
- If the AI model just returns text at any point, treat that as a noncommittal comment about the changes.
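In Go, with the chat plumbing hidden behind stand-in types, the loop looks roughly like this. Client, Message, and the helper functions are placeholders rather than reviewbot's real API, and the tools list is the one from the sketch earlier:
```go
package reviewbot

import "context"

// Stand-in types for whatever OpenAI-compatible chat client you use;
// they exist only to make the loop's shape concrete.
type Message struct {
	Role, Content string
	ToolCalls     []ToolCall
}

type ToolCall struct {
	Name      string
	Arguments string // JSON-encoded arguments from the model
}

type Client interface {
	Chat(ctx context.Context, msgs []Message, tools []Tool) (Message, error)
}

// These talk to GitHub and run local tools; their bodies are elided here.
func postComment(ctx context.Context, body string) error         { return nil }
func publishReview(ctx context.Context, args string) error       { return nil }
func runTool(ctx context.Context, call ToolCall) (string, error) { return "", nil }

// runAgent is the whole trick: keep feeding the model context until it
// commits to a review via submit_review.
func runAgent(ctx context.Context, llm Client, prompt string) error {
	msgs := []Message{{Role: "user", Content: prompt}}
	for {
		reply, err := llm.Chat(ctx, msgs, tools)
		if err != nil {
			return err
		}
		msgs = append(msgs, reply)

		// No tool calls: treat the text as a noncommittal comment.
		if len(reply.ToolCalls) == 0 {
			return postComment(ctx, reply.Content)
		}

		for _, call := range reply.ToolCalls {
			if call.Name == "submit_review" {
				// Terminal tool: publish the review and exit.
				return publishReview(ctx, call.Arguments)
			}
			// Any other tool: gather what it asked for and feed it
			// back to the model on the next pass around the loop.
			out, err := runTool(ctx, call)
			if err != nil {
				out = "tool error: " + err.Error()
			}
			msgs = append(msgs, Message{Role: "tool", Content: out})
		}
	}
}
```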
Don't use reviewbot
reviewbot is a hack that probably works well enough for me. It has a number of limitations including but not limited to:
- It does not work with closed-source repositories because the gitfs library doesn't support cloning repositories that require authentication. Could probably fix that with some elbow grease if I'm paid enough to do so.
- A fair number of test invocations had the agent rely on unpopulated fields from the GitHub API, which caused crashes. I am certain that I will only find more such examples and need to issue patches for them.
- reviewbot is about 350 lines of Go hacked up by hand in an afternoon. If you really need something like this, you can likely write one yourself with little effort.
Frequently asked questions
When such an innovation as reviewbot comes to pass, people naturally have questions. In order to give you the best reading experience, I asked my friends, patrons, and loved ones for their questions about reviewbot. Here are some answers that may or may not help:
Does the world really need another AI agent?
Probably not! This is something I made out of curiosity, not something I made for you to actually use. It was a lot easier to make than I expected and is surprisingly useful for how little effort was put into it.
Is there a theme of FAQ questions that you're looking for?
Nope. Pure chaos. Let it all happen in a glorious way.
Where do we go when we die?
How the fuck should I know? I don't even know if chairs exist.
Has anyone ever really been far even as decided to use even go want to do look more like?
At least half as much I have wanted to use go wish for that. It's just common sense, really.
If you have a pile of sand and take away one grain at a time, when does it stop being a pile?
When the wind can blow all the sand away.
How often does it require oatmeal?
Three times daily or the netherbeast will emerge and doom all of society. We don't really want that to happen so we make sure to feed reviewbot its oatmeal.
How many pancakes does it take to shingle a dog house?
At least twelve. Not sure because I ran out of pancakes.
Will this crush my enemies, have them fall at my feet, their horses and goods taken?
Only if you add that functionality in a pull request. reviewbot can do anything as long as its code is extended to do that thing.
Why should I use reviewbot?
Frankly, you shouldn't.
