Show HN: I built an AI twin recruiters can interview
2 by Charlie112 | 3 comments on Hacker News.
https://chengai.me

The problem: Hiring new grads is broken. Thousands of identical resumes, but we're all different people. Understanding someone takes time: assessments, phone screens, multiple interviews. Most candidates never get truly seen. I didn't want to be just another PDF, so I built an AI twin that recruiters can actually interview.

What you can do:
• Interview my AI about anything: https://chengai.me/chat
• Paste your JD to see if we match: https://ift.tt/XdNZ4Gq
• Explore my projects, code, and writing

What happened: I sent it to one recruiter on LinkedIn. The next day, traffic spiked as it spread internally, and I got interview invites within 24 hours.

The bigger vision: What if this became standard? Instead of resume spam → keyword screening → interview rounds that still miss good fits, let a recruiter's AI talk to a candidate's AI for deep discovery, on a platform where anyone can create an AI twin for genuine matching.

I'm seeking Software/AI/ML Engineering roles and can build production-ready solutions from scratch; the site itself shows what I can do. Would love HN's thoughts on both the execution and the vision.
Hack Nux
Watch the number of websites hacked today tick upward in real time, one by one, on a single page.
New ask Hacker News story: Ask HN: Cheap laptop for Linux without GUI (for writing)
Ask HN: Cheap laptop for Linux without GUI (for writing)
4 by locusofself | 0 comments on Hacker News.
Hey HN, I'm on a quest for a distraction-free writing device and considering a super cheap laptop I can just run vim/nano on. I'd like:
- Excellent battery life
- A good keyboard
- Reliable sleep/wake (why is this so hard with Linux?)
I'm thinking some kind of Chromebook? Maybe an old ThinkPad?
New ask Hacker News story: G Lang – A lightweight interpreter written in D (2.4MB)
G Lang – A lightweight interpreter written in D (2.4MB)
13 by pouyathe | 1 comment on Hacker News.
Hi HN, I've been working on a programming language called G. It is designed to be memory-safe and extremely fast, with a focus on a tiny footprint. The entire interpreter is written in D and weighs in at only 2.4MB. I built it because I wanted a modern scripting language that feels lightweight but has the safety of a high-level language.

Key features:
- Small: the binary is ~2.4MB
- Fast: optimized for x86_64
- Safe: memory-safe execution
- Std lib: includes std.echo, std.newline, etc.

GitHub: https://ift.tt/Fn2v9bH
I would love to get some feedback from the community on the syntax or the architecture!
New Show Hacker News story: Show HN: 127 PRs to Prod this wknd with 18 AI agents: metaswarm. MIT licensed
Show HN: 127 PRs to Prod this wknd with 18 AI agents: metaswarm. MIT licensed
3 by dsifry | 0 comments on Hacker News.
A few weeks ago I posted about GoodToGo ( https://ift.tt/i0oGzOs ), a tool that gives AI agents a deterministic answer to "is this PR ready to merge?" Several people asked about the larger orchestration system I mentioned. This is that system.

I got tired of being a project manager for Claude Code. It writes code fine, but shipping production code is seven or eight jobs: research, planning, design review, implementation, code review, security audit, PR creation, CI babysitting. I was doing all the coordination myself. The agent typed fast; I was still the bottleneck. What I really needed was an orchestrator of orchestrators: swarms of swarms of agents with deterministic quality checks.

So I built metaswarm. It breaks work into phases and assigns each to a specialist swarm orchestrator. It manages handoffs and uses BEADS for deterministic gates that persist across /compact, /clear, and even across sessions. Point it at a GitHub issue or brainstorm with it (it uses Superpowers to ask clarifying questions) and it creates epics, tasks, and dependencies, then runs the full pipeline to a merged PR, including outside code review from CodeRabbit, Greptile, and Bugbot.

The thing that surprised me most was the design review gate. Five agents (PM, Architect, Designer, Security, CTO) review every plan in parallel before a line of code gets written. All five must approve; three rounds max, then it escalates to a human. I expected a rubber stamp. It catches real design problems, dependency issues, and security gaps.

This weekend I pointed it at my backlog. 127 PRs merged, every one at 100% test coverage. No human wrote code, reviewed code, or clicked merge. OK, I guided it a bit, mostly helping with plans for some of the epics.

A few learnings:

Agent checklists are theater. Agents skipped coverage checks, misread thresholds, or decided they didn't apply. Prompts alone weren't enough. The fix was deterministic gates: BEADS, pre-push hooks, and CI jobs layered on top of the agent completion check. The gates block bad code whether or not the agent cooperates.

The agents are just markdown files. No custom runtime, no server, and while I built it in TypeScript, the agents are language-agnostic. You can read all of them, edit them, add your own.

It self-reflects too. After every merged PR, the system extracts patterns, gotchas, and decisions into a JSONL knowledge base. Agents only load entries relevant to the files they're touching. The more it ships, the fewer mistakes it makes; it learns as it goes.

metaswarm stands on two projects: https://ift.tt/81paUyO by Steve Yegge (git-native task tracking and knowledge priming) and https://ift.tt/D0YQBFR by Jesse Vincent (disciplined agentic workflows: TDD, brainstorming, systematic debugging). Both were essential.

Background: I founded Technorati, Linuxcare, and Warmstart; tech exec at Lyft and Reddit. I built metaswarm because I needed autonomous agents that could ship to a production codebase with the same standards I'd hold a human team to.

  $ cd my-project-name
  $ npx metaswarm init

MIT licensed. IANAL. YMMV. Issues/PRs welcome!
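The deterministic-gate idea in the post can be sketched in a few lines. This is a hypothetical illustration, not metaswarm's actual implementation: a coverage gate that reads a coverage.py-style JSON report and returns a hard pass/fail, so a pre-push hook or CI job can block the push regardless of what the agent claims. The file name, report shape, and threshold here are assumptions.

```python
import json
import sys


def coverage_gate(report_path: str, threshold: float = 100.0) -> bool:
    """Deterministic gate: pass only if total line coverage meets the threshold.

    Unlike an agent checklist, this check cannot be skipped or reinterpreted:
    it reads the report on disk and returns a hard boolean.
    """
    with open(report_path) as f:
        report = json.load(f)
    # coverage.py's `coverage json` output nests the overall percentage
    # under totals.percent_covered.
    covered = report["totals"]["percent_covered"]
    return covered >= threshold


if __name__ == "__main__":
    # Exit nonzero so a pre-push hook or CI job blocks the push on failure.
    sys.exit(0 if coverage_gate("coverage.json") else 1)
```

The point of the design is that the gate sits outside the agent loop entirely: the agent can claim whatever it likes about coverage, but the exit code is what merges or blocks.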
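The JSONL knowledge base described above could work along these lines. A minimal sketch, assuming each entry carries a `files` list of glob patterns it applies to — the field names and entry shape are my assumptions, not metaswarm's actual format: an agent loads only the entries whose patterns match the files it is about to touch, keeping its context small.

```python
import fnmatch
import json


def load_relevant_entries(kb_path: str, touched_files: list[str]) -> list[dict]:
    """Load only the knowledge-base entries relevant to the given files.

    Each line of the JSONL file is one entry; an entry is relevant when any
    of its glob patterns matches any of the files the agent will touch.
    """
    entries = []
    with open(kb_path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # tolerate blank lines between records
            entry = json.loads(line)
            patterns = entry.get("files", [])
            if any(fnmatch.fnmatch(path, pat)
                   for path in touched_files
                   for pat in patterns):
                entries.append(entry)
    return entries
```

So an agent about to edit `src/auth/login.ts` would receive only the entries whose patterns cover `src/auth/`, rather than the whole accumulated history of patterns and gotchas.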
New ask Hacker News story: Ask HN: What weird or scrappy things did you do to get your first users?
Ask HN: What weird or scrappy things did you do to get your first users?
4 by preston-kwei | 0 comments on Hacker News.
Hi everyone, I'm building Persona, a platform to delegate email scheduling to AI. Lately, I've been working hard to get those first users on board, but it's been quite challenging. I've already tried the typical strategies everybody talks about: cold email, LinkedIn InMail, careful targeting, decent copy. It's mostly been a dead end: low open rates, almost no replies.

At this point, I'm not looking for the usual advice from blog posts or Reddit. I'm specifically curious about unconventional or non-obvious things that actually worked for you early on, especially things that felt a bit scrappy, weird, or counterintuitive at the time. If you've been through this phase, what genuinely worked and got you your first users?