New Show Hacker News story: Show HN: Tmux-palette – Raycast-style command palette for tmux

Show HN: Tmux-palette – Raycast-style command palette for tmux
2 by eduwass | 3 comments on Hacker News.


New Show Hacker News story: Show HN: I spent $100 in Claude tokens and 1k battles training my AI tank

Show HN: I spent $100 in Claude tokens and 1k battles training my AI tank
2 by mazzystar | 0 comments on Hacker News.
Hi HN, I built AgenTank. It is a small game where an AI agent writes the logic for your tank. You watch it fight, give strategic feedback, let the agent update the tank code, and send it back into battle. I have run 1,000+ battles on my own tank and spent about $200 in Claude credits improving it. The part I enjoy most is not just winning, but watching the tank make visible mistakes, thinking of a better strategy, and seeing whether Claude can turn that into better code.

New Show Hacker News story: Show HN: Duckflix, an open-source self-hosted media streaming platform

Show HN: Duckflix, an open-source self-hosted media streaming platform
2 by patakxd | 0 comments on Hacker News.
I’ve been working on Duckflix, a self-hosted media streaming platform. It started as a full-stack project to combine a clean streaming UI with a Bun/Elysia backend, FFmpeg processing, SQLite, Docker deployment, and addon support.
Website: https://duckflix.fun
Demo: https://demo.duckflix.fun
GitHub: https://ift.tt/my9pWNj

New Show Hacker News story: Show HN: GIF Pile. a site to make piles of GIFs

Show HN: GIF Pile. a site to make piles of GIFs
2 by FatCat1979 | 0 comments on Hacker News.
I'm quite fond of obnoxious-looking GIFs in a post-ironic way, as a manner of shitposting and/or injecting humor into a chat. The issue, for no good reason at all, is that the simple use case of "have image/GIF background, bombard with garbage" had no real tooling. There are GIF editors out there (EZgif, my beloved, is probably my most-used site outside of search and social media), but they're kinda clunky for my specific use case of making bombastic digital eye-sandpaper. Other options were bleak and gave me the mark of the beast via shitty watermarks. I just wanted a pile of GIFs on top of each other, and thus far the "easiest" way was to bust open a video editor, muck around with it, mess up exporting as a GIF directly, get mad, export a 4-second MP4, and then use ffmpeg to get it working. Is this probably moronic? Yes. Am I likely to have missed a decent tool? Yes. Did I give up looking after sending 4 dollars to some guy for "no watermarks ever for $4", only for that "ever" to be a year, and then the clunky, weird login process not work? Absolutely. (You know who you are.)
This took me a few hours (most of which was dealing with the fact that I don't normally do web stuff, plus the clunk one would expect from that), and it's a minimal site for my personal minimal use case. It's static because I'm not going to host other people's uploads and I don't want to open that can of worms: all processing is done locally in your browser. Yes, this means that using a 4K image as the base layer for your GIF pile will take an age. It'll work eventually, though. This will never have a watermark unless I'm bought out (total investment thus far has been 14 bucks, 4 of which was that one guy), in which case I've probably earned it. At most I'll throw AdSense on there at some point to scrape a few cents from the people who can't figure out adblock, if it ever gets popular enough to warrant it.
There are no timelines or anything like that; it's literally just a pile of GIFs. Thus far my primary use case has been overlaying text GIFs from the various fancy-text-generator sites onto glitter backgrounds with uncomfortable rat GIFs to call people poor on the internet. This makes me happy. There are likely obvious UI, UX, or other U-whatever mistakes. If you point them out and I deem it pedantic, I'll probably laugh at you; if it's helpful, I'll probably implement it when I get a bit of time. Surprisingly, it works on mobile. The CSS is exceedingly generic and soulless at the moment; I just went off vague memories of SS13's TGUI. I'll likely scrap the CSS entirely and go full Neocities at some point, because that's more soulful.

New Show Hacker News story: Show HN: OpenGravity – A zero-install, BYOK vanilla JS clone of Antigravity

Show HN: OpenGravity – A zero-install, BYOK vanilla JS clone of Antigravity
9 by ab613 | 5 comments on Hacker News.
Hi. I’m a high school student studying for my GCSEs. I was using Google Antigravity heavily for my side projects, but I kept hitting the usage limits and getting random "agent terminated" errors. So I decided to try building my own version of the IDE. I love the UI, so I copied it as accurately as possible and then hooked some logic into it, including the INCREDIBLY finicky WebContainer API. I tried to keep it super lightweight, with no build steps or dependencies, and now that it's open source, I'm hoping people can build things on top of it that aren't possible with closed-source tools, like complex custom agent workflows.
Some screenshots:
- https://ift.tt/tp5dRa7...
- https://ift.tt/n1DEelt...
What it's made from:
- Pure vanilla JS: no React, Vue, or build step. Built entirely in plain HTML/CSS/JS to keep it super lightweight.
- WebContainer API and xterm.js: instead of faking a terminal, I (after much pain) hooked up the WebContainer API so the AI agent has a real in-browser Linux environment to run shell commands, install dependencies, and edit local files.
- BYOK (Bring Your Own Key): your API key ALWAYS stays in localStorage.
What's currently happening:
- It works, but it's an alpha. The AI can properly start projects and edit files, but because I built this over a few days before my exams, a lot of the UI dropdowns and buttons are currently just hardcoded placeholders.
- I’m open-sourcing it early because I think the foundation of a vanilla JS + WebContainer IDE is really strong, and I'd love to see where the community takes it while I'm doing my exams.
- Live demo: https://opengravity.pages.dev (zoom out to 80% if not full screen; it will prompt for a Gemini API key on load). Start by uploading a folder, then you can fiddle with the terminal and agent and see how it goes!
Would love to hear feedback on the code, the WebContainer integration, or how to improve the agent loop!

New ask Hacker News story: Ask HN: Which LLM are you using to evaluate your ideas?

Ask HN: Which LLM are you using to evaluate your ideas?
4 by Marius77 | 2 comments on Hacker News.
Question as in the title. Curious about your experience and which LLM has helped you out the most without saying yes to everything.

New Show Hacker News story: Show HN: I trained a chess engine to play like humans

Show HN: I trained a chess engine to play like humans
2 by hazard | 0 comments on Hacker News.
I built 1e4.ai, a chess web app where you play against neural networks trained to mimic human Lichess players at specific Elo ranges. There's a separate model for each 100-point rating bucket from ~800 to 2200+, and the bots not only choose human-like moves but also burn clock time, play worse under time pressure, and blunder in human-like ways.
Live demo: https://1e4.ai
Code: https://ift.tt/kPMWrNL
A few things that might be interesting:
- Trained on almost a full year of Lichess blitz games, around 1B total games.
- The architecture is a small (~9M-parameter) transformer-based network that takes the board, recent move history, the player's rating, and remaining clock time as input. There are three separate models per rating bucket: move, clock-usage, and win probability. The clock model is what makes the bots feel human-ish under time pressure rather than instant, and because the move model takes the clock as an input, it also learns to blunder under time pressure like a human might.
- Because the network is so tiny, no GPU is needed for inference; it runs easily on a local CPU.
- The downside of the tiny network is that it gets a bit weak as you turn the rating up past around 1700. It can spot short tactics but not long multi-move combinations.
- Initial training was on a rented 8xH100 cluster, followed by fine-tunes on my local GPU for the different rating ranges.
- Inspired by Maia-2 and DeepMind's "Grandmaster-Level Chess Without Search". On a held-out Lichess blitz benchmark, it beats Maia-2 blitz on top-1 move prediction (56.7% vs 52.7%) and pretty substantially on win-probability calibration (Brier 0.176 vs 0.272). Numbers and code in https://ift.tt/V86btzh...
- The data pipeline is C++ via nanobind, with training in PyTorch. Getting this right was actually the thing I spent the most time on: pre-shuffling the dataset, and then being able to read the shuffled dataset sequentially at training time, kept GPU utilization high. Without this, a huge percentage of time went to I/O while the GPU sat idle.
Happy to answer questions about the rating conditioning, the clock model, or the data pipeline.
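The rating-and-clock conditioning the post describes can be sketched roughly like this. This is a minimal illustration, not the actual 1e4.ai code: the function name, the bucketing rule, and the 180-second base clock (standing in for a 3+0 blitz game) are all my assumptions; the post only says the model sees the player's rating (bucketed in 100-point ranges from ~800 to 2200+) and the remaining clock time.

```python
def encode_conditioning(rating, clock_seconds, base_clock=180.0):
    """Illustrative encoding of the two conditioning inputs: the
    player's rating, bucketed into the 100-point ranges the bots are
    trained on, and the remaining clock, normalized so the model can
    learn time-pressure behavior. All names here are hypothetical."""
    # Clamp into the ~800 to 2200+ range of trained buckets,
    # then snap down to the nearest 100-point bucket.
    clamped = max(800, min(2200, rating))
    rating_bucket = (clamped // 100) * 100
    # Fraction of the base time control remaining, clamped to [0, 1]
    # (increments can push the clock above the starting time).
    clock_frac = min(1.0, max(0.0, clock_seconds / base_clock))
    return rating_bucket, clock_frac

# A 1547-rated player with 30 seconds left in a 3+0 blitz game
# maps to the 1500 bucket with 1/6 of the clock remaining:
bucket, frac = encode_conditioning(1547, 30.0)
```

Feeding the clock in as a plain input like this is enough for the move model to associate low `clock_frac` values with the hastier, blunder-prone moves humans actually played in those positions.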
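The pre-shuffle trick from the data-pipeline bullet, shuffle once offline so the training loop can read purely sequentially, can be sketched like this. This is a pure-Python stand-in under my own naming; the real pipeline is C++ via nanobind and writes the shuffled data to disk rather than keeping it in memory.

```python
import random

def preshuffle(records, seed=0):
    """Offline step: shuffle the whole dataset ONCE, then persist it
    in shuffled order (here just returned as a list)."""
    rng = random.Random(seed)
    out = list(records)
    rng.shuffle(out)
    return out

def sequential_batches(shuffled, batch_size):
    """Training-time reader: stream the pre-shuffled data front to
    back. Purely sequential reads let disk/OS read-ahead keep the GPU
    fed, instead of one random seek per sample leaving it idle on I/O."""
    for i in range(0, len(shuffled), batch_size):
        yield shuffled[i:i + batch_size]

positions = list(range(100))   # stand-in for the real training positions
shuffled = preshuffle(positions)
batches = list(sequential_batches(shuffled, 32))
```

Because the shuffle already happened, each epoch still sees the data in a fixed random order without paying any seek cost at training time; re-shuffling between epochs would mean rewriting the file (or shuffling at a coarser shard level).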