Show HN: Optio – Orchestrate AI coding agents in K8s to go from ticket to PR
8 by jawiggins | 5 comments on Hacker News.
Like many of you, I've been juggling multiple Claude Code/Codex sessions at once, managing several lines of work and worktrees across repos. I wanted a way to manage those lines of work easily and to reduce the input I need to give, letting the agents remove me as a bottleneck from as much of the process as possible. So I built an orchestration tool for AI coding agents.

Optio is an open-source orchestration system that turns tickets into merged pull requests using AI coding agents. You point it at your repos, and it handles the full lifecycle:

- Intake — pull tasks from GitHub Issues, Linear, or create them manually
- Execution — spin up isolated K8s pods per repo, run Claude Code or Codex in git worktrees
- PR monitoring — watch CI checks, review status, and merge readiness every 30s
- Self-healing — auto-resume the agent on CI failures, merge conflicts, or reviewer change requests
- Completion — squash-merge the PR and close the linked issue

The key idea is the feedback loop. Optio doesn't just run an agent and walk away — when CI breaks, it feeds the failure back to the agent. When a reviewer requests changes, the comments become the agent's next prompt. It keeps going until the PR merges or you tell it to stop.

Built with Fastify, Next.js, BullMQ, and Drizzle on Postgres. Ships with a Helm chart for production deployment.
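That loop is simple to sketch. Here is a minimal, hypothetical Python version; the names (PRStatus, fetch_status, resume_agent) are illustrative stand-ins, not Optio's actual API, which is built on the TypeScript stack above:

    import time
    from dataclasses import dataclass

    # Hypothetical sketch only; none of these names are Optio's real API.
    @dataclass
    class PRStatus:
        merged: bool = False
        mergeable: bool = False
        ci_failed: bool = False
        changes_requested: bool = False
        detail: str = ""

    def fetch_status(pr_id: int) -> PRStatus:
        # Real code would query the GitHub API; stubbed so the sketch runs.
        return PRStatus(merged=True)

    def resume_agent(pr_id: int, prompt: str) -> None:
        print(f"resume agent on PR #{pr_id}: {prompt[:60]}")

    def watch_pr(pr_id: int, poll_seconds: int = 30) -> None:
        while True:
            status = fetch_status(pr_id)
            if status.merged:
                return  # done: close the linked issue here
            if status.ci_failed:
                # The CI failure itself becomes the agent's next prompt.
                resume_agent(pr_id, "CI failed:\n" + status.detail)
            elif status.changes_requested:
                resume_agent(pr_id, "Reviewer comments:\n" + status.detail)
            elif status.mergeable:
                pass  # squash-merge via the GitHub API, then loop to confirm
            time.sleep(poll_seconds)

    watch_pr(42)

The essential design choice is that failure states re-enter the loop as prompts instead of terminating it.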

Show HN: GhostDesk – MCP server giving AI agents a full virtual Linux desktop
2 by maltyxxx | 0 comments on Hacker News.
Most LLMs can reason. They can't use software. GhostDesk gives your agent a full Linux desktop and the motor skills to operate it like a human: realistic mouse movement, natural typing, and a screenshot fallback for CAPTCHAs. It reads UIs semantically and behaves like a real user when sites try to detect bots. Book a flight, scrape a site without selectors, operate legacy software with no API, or run QA across an entire app from one prompt. If a human can do it on a desktop, your agent can too. Runs in Docker. Spin up multiple instances in parallel, each driven by a sub-agent. No real ceiling. Works with Claude, GPT, Gemini, or any local model (Ollama, LM Studio). MIT licensed.
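The post doesn't show how the "realistic mouse movement" is implemented; a common approach is a randomized Bezier path with ease-in-out timing. A minimal stdlib Python sketch, assuming nothing about GhostDesk's internals:

    import random

    def human_mouse_path(start, end, steps=60):
        # Cubic Bezier with randomized control points plus jitter, so the
        # cursor curves and wobbles instead of moving in a straight line.
        (x0, y0), (x3, y3) = start, end
        x1 = x0 + (x3 - x0) * 0.3 + random.uniform(-80, 80)
        y1 = y0 + (y3 - y0) * 0.3 + random.uniform(-80, 80)
        x2 = x0 + (x3 - x0) * 0.7 + random.uniform(-80, 80)
        y2 = y0 + (y3 - y0) * 0.7 + random.uniform(-80, 80)
        points = []
        for i in range(steps + 1):
            t = i / steps
            t = t * t * (3 - 2 * t)  # ease-in-out: slow start and stop
            u = 1 - t
            x = u**3 * x0 + 3 * u**2 * t * x1 + 3 * u * t**2 * x2 + t**3 * x3
            y = u**3 * y0 + 3 * u**2 * t * y1 + 3 * u * t**2 * y2 + t**3 * y3
            points.append((x + random.uniform(-1, 1), y + random.uniform(-1, 1)))
        return points

    # Each point would be replayed against the virtual display's pointer.
    print(human_mouse_path((100, 100), (800, 500))[:3])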

Show HN: I took back Video.js after 16 years and we rewrote it to be 88% smaller
29 by Heff | 2 comments on Hacker News.
What do you do when private equity buys your old company and fires the maintainers of the popular open source project you started over a decade ago? You reboot it, and bring along some new friends to do it. Video.js is used by billions of people every month, on sites like Amazon.com, LinkedIn, and Dropbox, and yet it wasn't in great shape. A skeleton crew of maintainers was doing its best with a dated architecture, but it needed more. So Sam from Plyr, Rahim from Vidstack, and Wes and Christian from Media Chrome jumped in to help me rebuild it better, faster, and smaller. It's in beta now. Please give it a try and tell us what breaks.

Is Trusttunnel easy for people to use?
2 by AnonyMD | 0 comments on Hacker News.
I tried setting up Trusttunnel and thought it worked fine as a VPN. However, I'd like to know what other people think.

Show HN: OpenCastor Agent Harness Evaluator Leaderboard
3 by craigm26 | 0 comments on Hacker News.
I've been building OpenCastor, a runtime layer that sits between a robot's hardware and its AI agent. One thing that surprised me: the order you arrange the skill pipeline (context builder → model router → error handler, etc.) and parameters like thinking_budget and context_budget affect task success rates as much as model choice does.

So I built a distributed evaluator. Robots contribute idle compute to benchmark harness configurations against OHB-1, a small benchmark of 30 real-world robot tasks (grip, navigate, respond, etc.) using local LLM calls via Ollama. The search space is 263,424 configs (8 dimensions: model routing, context budget, retry logic, drift detection, etc.).

The demo leaderboard shows results so far, broken down by hardware tier (Pi5+Hailo, Jetson, server, budget boards). The current champion config is free to download as a YAML and apply to any robot. P66 safety parameters are stripped on apply — no harness config can touch motor limits or ESTOP logic.

Looking for feedback on: (1) whether the benchmark tasks are representative, (2) whether the hardware tier breakdown is useful, and (3) anyone who's run fleet-wide distributed evals of agent configs for robotics or otherwise.
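For a concrete sense of the config space, here is a hedged Python sketch; the dimension names come from the post, but the HarnessConfig shape and all value ranges are my assumptions, not OpenCastor's actual schema:

    from dataclasses import dataclass
    from itertools import product

    # Hypothetical harness config; fields and values are illustrative.
    @dataclass
    class HarnessConfig:
        pipeline: tuple        # ordering of skill-pipeline stages
        model_route: str       # which local model handles the task
        context_budget: int    # tokens the context builder may assemble
        thinking_budget: int   # tokens the model may spend reasoning
        max_retries: int       # retry logic on failed steps
        drift_detection: bool

    # The evaluator sweeps a grid over every dimension; per-task success
    # rates for each combination are what land on the leaderboard.
    grid = product(
        [("context_builder", "model_router", "error_handler"),
         ("model_router", "context_builder", "error_handler")],
        ["local-llama", "local-qwen"],
        [2048, 4096, 8192],
        [256, 1024],
        [0, 1, 3],
        [False, True],
    )
    configs = [HarnessConfig(*combo) for combo in grid]
    print(len(configs))  # 144 in this toy grid; the real space is 263,424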

Show HN: Cq – Stack Overflow for AI coding agents
21 by peteski22 | 4 comments on Hacker News.
Hi all, I'm Peter, a Staff Engineer at Mozilla.ai, and I want to share our idea for a standard for shared agent learning. Conceptually it fits easily in my mental model as a Stack Overflow for agents.

The project is trying to see if we can get agents (any agent, any model) to propose 'knowledge units' (KUs) in a standard schema based on gotchas they run into during use, and to proactively query for existing KUs to get insights they can verify and confirm if they prove useful. It's currently very much a PoC with a more lofty proposal in the repo; we're trying to iterate from local use, up to team level, and ideally eventually have some kind of public commons.

At the team level (see our Docker Compose example), you configure your coding agent to point at the team API address so KUs are sent there instead, where they can be reviewed by a human in the loop (HITL) via a UI in the browser before they're allowed to appear in queries by other agents on your team.

We're learning a lot even from using it locally on various repos internally, not just in the kind of KUs it generates, but also from a UX perspective on making it easy to start using it and to approve KUs in the browser dashboard. There are bigger, complex problems to solve in the future around data privacy, governance, etc., but for now we're super focused on getting something that people can see value from really quickly in their day-to-day.

Tech stack:

* Skills - markdown
* Local Python MCP server (FastMCP) - manages a local SQLite knowledge store
* Optional team API (FastAPI, Docker) for sharing knowledge across an org
* Installs as a Claude Code plugin or OpenCode MCP server
* Local-first by default; your knowledge stays on your machine unless you opt into team sync by setting the address in config
* OSS (Apache 2.0 licensed)

Here's an example of something that seemed straightforward: when asked to write a GitHub Action, Claude Code often used actions that were multiple major versions out of date because of its training data. In this case I told the agent what I saw when I reviewed the GitHub Action YAML file it created, and it proposed a knowledge unit to be persisted. Next time, in a completely different repo using OpenCode and an OpenAI model, the cq skill was used up front before the task started; it got the information about the gotcha on major versions in training data, checked GitHub proactively, and used the correct, latest major versions. It then confirmed the KU, increasing its confidence score.

I guess some folks might say: well, there's a CLAUDE.md in your repo, or in ~/.claude/. But we're looking further than that: we want this to be available to all agents and all models, and maybe more importantly we don't want to stuff AGENTS.md or CLAUDE.md with loads of rules that lead to unpredictable behaviour. This is targeted information on a particular task, which seems a lot more useful.

Right now it can be installed locally as a plugin for Claude Code and OpenCode:

    claude plugin marketplace add mozilla-ai/cq
    claude plugin install cq

This allows you to capture data in your local ~/.cq/local.db (the data doesn't get sent anywhere else).

We'd love feedback on this; the repo is open and public, so GitHub issues are welcome. We've posted on some of our social media platforms with a link to the blog post (below), so feel free to reply to us if you found it useful or ran into friction; we want to make this something that's accessible to everyone.
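To make the GitHub Actions example above concrete, here is a guess at what a KU might look like; the real schema lives in the repo, and every field name below is illustrative, not cq's actual format:

    # Hypothetical shape of a knowledge unit (KU); field names are
    # illustrative only. The actual schema is in the mozilla-ai/cq repo.
    ku = {
        "id": "ku-0142",
        "title": "GitHub Actions versions in training data are stale",
        "gotcha": "Models often pin actions several major versions behind; "
                  "check GitHub for the current major version before pinning.",
        "tags": ["github-actions", "ci"],
        "confidence": 0.8,     # raised when another agent confirms the KU
        "status": "approved",  # HITL review state gating team-wide queries
        "proposed_by": "claude-code",
    }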
Blog post with the full story: https://ift.tt/BpK0C9W
GitHub repo: https://ift.tt/xKB2Fjr
Thanks again for your time.

Ask HN: Is using AI tooling for a PhD literature review dishonest?
4 by latand6 | 1 comment on Hacker News.
I'm a PhD student in structural engineering. My dissertation topic is using LLM agents to automate FEA calculations in the common Ukrainian software that companies use. I'm writing my literature review now, and I've vibecoded a personal local dashboard that helps me manage the process.

I use LLM agents to fill in a LaTeX template in a GitHub repo (this automates formatting, and an IDE lets me view diffs). Then I run ChatGPT Pro to collect all the papers relevant to my topic (and how they're relevant), and I gather the ones whose PDFs are available online. Everything lives in a fixed folder structure of plain files like Markdown and JSON.

The idea of the dashboard is the following: I run Codex through a web chat to identify the quotes relevant to my dissertation topic and how they are relevant; it combines them into a number of claims, each connected to its quote with a link. Then I review each quote and each claim manually and tick the boxes. There is also a button that runs a verification script, which validates that each exact quote really is in the PDF. This way I can collect real evidence and pick up new insights while reading.

I remember doing all of this manually during my master's degree in the UK. That was a terrible and tedious experience, partly because I have ADHD.

So my question is: is it dishonest? I can defend every claim in the review, because I built the verification pipeline and reviewed each quote and claim manually. I arguably understand the literature better than if I had read and highlighted everything by hand. But I know that many universities would consider any AI-generated text academic misconduct, and I don't quite understand the principle behind that position. If you outsource proofreading, nobody cares; when you use Grammarly, same thing. But if I use an LLM to create text from verified, structured, human-reviewed evidence, it might be considered dishonest.
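The author's script isn't shown, but the verification step described (confirming an extracted quote really appears in the source PDF) is easy to sketch in Python with pypdf; whitespace is normalized because PDF text extraction mangles line breaks:

    import re
    from pypdf import PdfReader  # pip install pypdf

    def normalize(text: str) -> str:
        # Collapse whitespace and case so line breaks and spacing in the
        # extracted PDF text don't cause false negatives.
        return re.sub(r"\s+", " ", text).strip().lower()

    def quote_in_pdf(quote: str, pdf_path: str) -> bool:
        pages = PdfReader(pdf_path).pages
        full_text = " ".join(page.extract_text() or "" for page in pages)
        return normalize(quote) in normalize(full_text)

    # e.g. quote_in_pdf("finite element mesh refinement", "paper.pdf")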