Notably absent from X during Artemis launch: Elon
2 by boringg | 0 comments on Hacker News.
Hack Nux
Watch the number of websites hacked today tick upward in real time, one by one, on a single page.
New Show Hacker News story: Show HN: Mkdnsite – Markdown-native web server for humans (HTML) and agents (md)
Show HN: Mkdnsite – Markdown-native web server for humans (HTML) and agents (md)
2 by nexdrew | 0 comments on Hacker News.
# What?

Introducing mkdnsite ("markdown site"): an open-source, Markdown-native web server that serves HTML to humans and raw Markdown to agents. No build step required. It runs on Bun/Node/Deno, as an OS-specific standalone executable, or as a Docker container. Possibly the easiest way to go from Markdown files to a functional website in the new agentic era.

Features:

- Runtime-only, zero build
- Content negotiation: HTML for browsers, Markdown for agents
- GitHub-Flavored Markdown rendering
- Mermaid diagrams, KaTeX math, embedded Chart.js charts, and syntax highlighting included
- Full-text search for humans, MCP tools for agents
- Customizable UI theming with automatic light/dark mode support
- Pull Markdown files directly from a GitHub repo

See the official docs at https://mkdn.site

# Why?

Back in February, I saw Cloudflare's announcement of "Markdown for Agents" ( https://ift.tt/n97hrq1 ). At the time, I thought: "So I'm writing my API docs or blog in Markdown and converting it to HTML for a website, only to have Cloudflare turn it back into Markdown for AI/agent consumption." That seemed odd to me.

I'm a Node.js developer, but I had recently been building projects on Bun because of its "batteries included" features, like cross-compilation of standalone executables (similar to Go), that Node.js lacked natively (yes, I'm aware of Node SEA, but it's messy and complicated, and `bun build --compile` is not). Then, when I found `Bun.markdown`, something clicked: building a web server that converts Markdown to HTML at runtime should be super easy. And agents actually want Markdown, so why not combine the two ideas? Humans like writing Markdown (well, at least I do) and agents like reading Markdown (less verbose, easier to grok, fewer tokens). Add the fact that we can now use AI to write software, and my side project was born.

Is Markdown-to-HTML a new concept? Absolutely not; it's old and well-established. What I think is new is the ability to do everything at runtime (no build step required) plus built-in support for AI agents: mkdnsite has content negotiation, automated llms.txt, an MCP server, and support for agent headers.

# How?

I worked with Claude to refine the idea, come up with basic requirements/specs, and scaffold the project. I started on March 7. The following Friday, I configured my first set of OpenClaw agents on my personal machine and set them up to use Slack. From that point on, I spent most evenings and every weekend building mkdnsite and a hosted service (at https://mkdn.io ) by logging ideas as GitHub issues and talking with my "team lead" agent on Slack to pick up the work and implement features. mkdnsite v1.0.0 was released on March 16; the current version, v1.4.1, was released on March 28. Almost every line of code was written by AI, either via an autonomous OpenClaw agent or via individual Claude Code sessions.

# So what?

Just looking for some honest feedback. Is this useful? Is it dumb? Is there another tool that offers the same combination of features? (I looked and couldn't find one.) I'm not downplaying SSGs at all; I quite like Astro, and I love GitHub Pages. I just think there's room for an easier, simpler solution. Please try it out and let me know what you think. Thanks.
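The content-negotiation idea described above can be sketched in a few lines. This is a minimal illustration under my own assumptions (a hardcoded page and a stand-in for the Markdown-to-HTML renderer), not mkdnsite's actual implementation:

```typescript
import { createServer } from "node:http";

// Decide which representation the client wants. Agents typically send
// Accept: text/markdown (or text/plain); browsers send text/html.
export function negotiate(accept: string | undefined): "markdown" | "html" {
  const a = (accept ?? "").toLowerCase();
  if (a.includes("text/markdown") || a.includes("text/plain")) return "markdown";
  return "html"; // default to HTML for browsers and unknown clients
}

const page = "# Hello\n\nServed from a Markdown file at runtime.";

const server = createServer((req, res) => {
  if (negotiate(req.headers.accept) === "markdown") {
    res.writeHead(200, { "Content-Type": "text/markdown; charset=utf-8" });
    res.end(page); // raw Markdown for agents
  } else {
    res.writeHead(200, { "Content-Type": "text/html; charset=utf-8" });
    // stand-in for a real runtime renderer such as Bun.markdown
    res.end("<h1>Hello</h1><p>Served from a Markdown file at runtime.</p>");
  }
});
// server.listen(3000) — left commented so the sketch runs without binding a port
```

The same file serves both audiences; the only branch is on the `Accept` header, which is what makes a zero-build setup possible.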
New Show Hacker News story: Show HN: Dull – Instagram Without Reels, YouTube Without Shorts (iOS)
Show HN: Dull – Instagram Without Reels, YouTube Without Shorts (iOS)
2 by kasparnoor | 0 comments on Hacker News.
I kept deleting and redownloading Instagram because I couldn't stop watching Reels but needed the app for DMs. Tried screen time limits, just overrode them. So I built this. Dull loads Instagram, YouTube, Facebook, and X and filters out short-form content with a mix of CSS and JS injection. MutationObserver handles anything that lazy-loads after the page renders, which is most of the annoying stuff since these platforms love to load content dynamically. The ongoing work is maintaining the filters. Platforms change their DOM all the time, Instagram obfuscates class names, YouTube restructures how Shorts appear in the feed, etc. It's a cat-and-mouse thing that never really ends. Also has grayscale mode, time limits, and usage tracking. Happy to answer questions.
New ask Hacker News story: ReactOS to reverse engineer Linux Kernel A.I. Pull Requests, helping Linux-Libre
ReactOS to reverse engineer Linux Kernel A.I. Pull Requests, helping Linux-Libre
3 by pqlfvn | 0 comments on Hacker News.
Here at ReactOS, we gave up on making an open-source OS that doesn't crash and decided to pivot our strategy. Hearing about the new A.I. thing that managers of old once called "program generators to replace programmers with specifications," we decided that this memory-hungry, clock-cycle-hungry black-box program, when instructed to make pull requests against the Linux kernel source code, deserves to have its pull requests reverse engineered. This would also help the Linux-libre team, which sometimes removes C arrays of hexadecimal characters in its attempts to "de-blob the kernel."
New Show Hacker News story: Show HN: Browserbeam – a browser API built for AI agents
Show HN: Browserbeam – a browser API built for AI agents
2 by nyku | 0 comments on Hacker News.
I often use LLMs to automate different workflows, some of which include browsing the web and gathering data. At some point I started noticing a few things that bothered me: the browser interactions were clunky, as if the agent was struggling to "see" and understand the page, and as a result many tokens were wasted. The same went for knowing whether the page was actually ready. I started digging deeper, and at some point I just bluntly asked in the Cursor chat: "I ask you, as an LLM that uses these headless browsers, what do you wish people would build to make your work easier?" And it worked: I expanded the "Thinking" section and saw "The user is asking me a really interesting meta-question ...", after which it listed the ten most painful issues in agent<->browser interaction. So I started building a browser API that returns what LLMs actually need, not what browsers return. Fast forward a few weeks and here we are: a REST API built specifically to help LLMs interact with real browsers. Instead of raw HTML, you get Markdown, a page map, short refs (e1, e2) for clicking instead of CSS selectors, a stable flag when the page is ready, diffs after each step, a list of all interactive elements (links, buttons, inputs), automatic blocker dismissal, and a small extract step that returns structured JSON from a schema you describe. Official SDKs for Python, TypeScript, and Ruby. MCP server for Cursor and Claude Desktop. Would appreciate any feedback, especially on the API design.
New Show Hacker News story: Show HN: DeepTable – an API that converts messy Excel files into structured data
Show HN: DeepTable – an API that converts messy Excel files into structured data
6 by francisrafal | 0 comments on Hacker News.
We tried to build an Excel error checker. To do that, we first needed to actually understand the semantic structure of a spreadsheet, so we built that, and it turned out to be the harder, more general problem. The core issue: most real-world spreadsheets aren't relational tables. Merged cells, multi-level headers, multiple tables per sheet, totals mixed in with data. You can't just dump them to CSV and call it done. LLMs handle the easy cases but fall apart on complex workbooks at scale. Our approach uses an agent-guided compilation pipeline that produces SQL-ready relational tables with full cell-level provenance. This demo visualizes what we do: https://ift.tt/peWl7AV... We have a handful of early customers but honestly don't know yet whether this is a real market or a niche problem. We're posting this to hear from people who've dealt with arbitrary spreadsheet ingestion, whether you solved it, gave up, or are still living with the pain. If you want to try it on your own files, email me (see my profile for my email) and I'll give you API access.
New ask Hacker News story: LinkedIn uses 65GB of RAM with 7 tabs opened
LinkedIn uses 65GB of RAM with 7 tabs opened
3 by daniele_dll | 1 comments on Hacker News.
https://ibb.co/605p8bP3 A couple of months ago my machine became totally unusable and I couldn't understand why. After a quick check, I discovered the RAM was full and so was the swap. Once I saw that Chrome had eaten more than half of my RAM, I looked at its per-tab memory consumption and was shocked: 65 GB is just insane. (RAM bought a couple of years ago.)