New Show Hacker News story: Show HN: Interactive WordNet Visualizer – Explore Semantic Relations as a Graph

Show HN: Interactive WordNet Visualizer – Explore Semantic Relations as a Graph
2 by ricky_risky | 0 comments on Hacker News.


New ask Hacker News story: I lost my ability to learn anything new because of AI and I need your opinions

I lost my ability to learn anything new because of AI and I need your opinions
2 by dokdev | 4 comments on Hacker News.
I feel like I’ve lost my ability to learn because of AI. It is now so easy to generate code that it feels meaningless to focus and spend time crafting it myself. I am deeply sad that we may be losing the craftsmanship side of programming; it feels less important to understand the fundamentals when a model can produce something that works in seconds. AI seems to abstract away the fundamentals.

One could argue that it was always like this. Low-level languages like C abstracted away assembly and CPU architecture. High-level languages abstracted away low-level languages. Frameworks abstracted away some of the fundamentals. Every generation built new abstractions on top of old ones.

But there is a big difference with AI. Until now, every abstraction was engineered and deterministic. You could reason about it and trace it. LLMs, on the other hand, are non-deterministic. Therefore, we cannot treat their outputs as just another layer of abstraction. I am not saying we cannot use them. I am saying we cannot fully trust them. Yet everyone (or maybe just the bubble I am in) pushes the use of AI.

For example, I genuinely want to invest time in learning Rust, but at the same time, I am terrified that all the effort and time I spend learning it will become obsolete in the future. And the reason it might become obsolete may not be because the models are perfect and always produce high-quality code; it might simply be because, as an industry, we will accept “good enough” and stop pushing for high quality. As of now, models can already generate code with good-enough quality.

Is it only me, or does it feel like there are half-baked features everywhere now? Every product ships faster, but with rough edges. Recently, I saw Claude Code using 10 GiB of RAM. It is simply a TUI app.

Don’t get me wrong, I also use AI a lot. I like that we can try out different things so easily. As a developer, I am confused and overwhelmed, and I want to hear what other developers think.

New Show Hacker News story: Show HN: Demucs music stem separator rewritten in Rust – runs in the browser

Show HN: Demucs music stem separator rewritten in Rust – runs in the browser
5 by nikhilunni | 1 comment on Hacker News.
Hi HN! I reimplemented HTDemucs v4 (Meta's music source separation model) in Rust, using Burn. It splits any song into individual stems — drums, bass, vocals, guitar, piano — with no Python runtime or server involved.

Try it now: https://nikhilunni.github.io/demucs-rs/ (needs a WebGPU-capable browser — Chrome/Edge work best)
GitHub: https://ift.tt/StQJox2

It runs three ways:
- In the browser — the full ML inference pipeline compiles to WASM and runs on your GPU via WebGPU. No uploads, nothing leaves your machine.
- Native CLI — Metal on macOS, Vulkan on Linux/Windows. Faster than the browser path.
- DAW plugin — a VST3/CLAP plugin for macOS with a native SwiftUI UI. Load a track, separate it, drag stems directly into your DAW timeline, or play it as a MIDI instrument with solo/faders.

The core inference library is built on Burn ( https://burn.dev ), a Rust deep learning framework. The same `demucs-core` crate compiles to both native and `wasm32-unknown-unknown` — the only thing that changes is the GPU backend.

Model weights are F16 safetensors hosted on Hugging Face, downloaded and cached automatically on first use on all platforms. Three variants: standard 4-stem (84 MB), 6-stem with guitar/piano (84 MB), and a fine-tuned bag-of-4-models for best quality (333 MB).

The existing implementations I found online were mostly wrappers around the original Python implementation, and not very portable — the model works remarkably well, and I wanted to be able to quickly create samples/remixes without leaving the DAW or my browser.

Right now the implementation is pretty macOS-heavy, as that's what I'm testing with, but all of the building blocks for other platforms are ready to build on. I want this to grow into a general utility for music producers, not just "works on my machine." It was a fun first foray into DSP and the state of the art of ML over WASM, with lots of help from Claude!
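The "downloaded and cached automatically on first use" behavior described above can be sketched in a few lines. This is an illustrative model only, not demucs-rs's actual logic (which is in Rust); the function names and cache layout here are assumptions.

```python
# Sketch of download-on-first-use weight caching. `fetch` is any callable
# that downloads bytes for a model name; it is only invoked on a cache miss.
import hashlib
from pathlib import Path

def cached_weights(name: str, fetch, cache_dir: Path) -> bytes:
    """Return model weights, downloading via `fetch` only on first use."""
    cache_dir.mkdir(parents=True, exist_ok=True)
    # Key the cache file on the model name so variants don't collide.
    fname = hashlib.sha256(name.encode()).hexdigest() + ".safetensors"
    path = cache_dir / fname
    if not path.exists():
        path.write_bytes(fetch(name))   # first use: download and store
    return path.read_bytes()            # later uses: read from disk
```

Subsequent calls for the same variant hit the disk cache, so the 84 MB or 333 MB download happens once per machine.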

New ask Hacker News story: Facebook Appears to Be Down

Facebook Appears to Be Down
13 by Molitor5901 | 11 comments on Hacker News.
Tried to log in and checked with others across our corporate footprint, and we all get the same message: "Account Temporarily Unavailable. Your account is currently unavailable due to a site issue. We expect this to be resolved shortly. Please try again in a few minutes." Can others please confirm? Thank you.

New Show Hacker News story: Show HN: We filed 99 patents for deterministic AI governance (Prior Art vs. RLHF)

Show HN: We filed 99 patents for deterministic AI governance (Prior Art vs. RLHF)
2 by genesalvatore | 0 comments on Hacker News.
For the last few months, we've been working on a fundamental architectural shift in how autonomous agents are governed.

The current industry standard relies almost entirely on probabilistic alignment (RLHF, system prompts, constitutional training). It works until it's jailbroken or the context window overflows. A statistical disposition is not a security boundary.

We've built an alternative: Deterministic Policy Gates. In our architecture, the LLM is completely stripped of execution power. It can only generate an "intent payload." That payload is passed to a process-isolated, deterministic execution environment where it is evaluated against a cryptographically hashed constraint matrix (the constitution). If it violates the matrix, it is blocked. Every decision is then logged to a Merkle-tree substrate (GitTruth) for an immutable audit trail.

We filed 99 provisional patents on this architecture starting January 10, 2026. Crucially, we embedded strict humanitarian use restrictions directly into the patent claims themselves (the Peace Machine Mandate) so the IP cannot legally be used for autonomous weapons, mass surveillance, or exploitation.

I wrote a full breakdown of the architecture, why probabilistic safety is a dead end, and the timeline of how we filed this before the industry published their frameworks.

Read the full manifesto here: https://ift.tt/oq6YWXn...
The full patent registry is public here: https://aos-patents.com

I'm the founder and solo inventor. Happy to answer any questions about the deterministic architecture, the Merkle-tree state persistence, or the IP strategy of embedding ethics directly into patent claims.
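The gate-plus-audit-log idea above can be sketched generically. This is not the author's code: the policy contents, field names, and the simple hash chain (a degenerate Merkle structure) are all assumptions made for illustration.

```python
# Illustrative sketch of a deterministic policy gate: an LLM's "intent
# payload" is checked against a hashed constraint set, and every decision
# is appended to a hash-chained audit log where each entry commits to the
# previous one.
import hashlib
import json

BLOCKED_ACTIONS = {"delete_database", "send_funds"}  # assumed example policy
CONSTRAINT_HASH = hashlib.sha256(
    json.dumps(sorted(BLOCKED_ACTIONS)).encode()).hexdigest()

audit_log = []  # append-only; each record commits to its predecessor

def gate(intent: dict) -> bool:
    """Allow or block an intent. The decision is pure and replayable:
    the same intent and policy always yield the same verdict."""
    allowed = intent.get("action") not in BLOCKED_ACTIONS
    prev = audit_log[-1]["entry_hash"] if audit_log else "genesis"
    record = {"intent": intent, "allowed": allowed,
              "policy": CONSTRAINT_HASH, "prev": prev}
    # Hash the record (without its own hash field) to chain the log.
    record["entry_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    audit_log.append(record)
    return allowed
```

The key property the post claims is visible here: the model never executes anything; it only proposes, and the verdict plus its provenance are tamper-evident because altering any past entry breaks every subsequent `prev` link.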

New Show Hacker News story: Show HN: Open-Source Postman for MCP

Show HN: Open-Source Postman for MCP
2 by baristaGeek | 0 comments on Hacker News.


New Show Hacker News story: Show HN: CrowPay – add x402 in a few lines, let AI agents pay per request

Show HN: CrowPay – add x402 in a few lines, let AI agents pay per request
2 by ssistilli | 0 comments on Hacker News.
Hey HN – I've been building in the agent payments space for a while, and the biggest bottleneck I see isn't the protocol (x402 is great) — it's that most API providers have no idea how to actually integrate it. The docs exist, the middleware exists, but going from "I have a REST API" to "agents can discover and pay for my endpoints" still takes way more work than it should.

CrowPay fixes that. We integrate x402 payment headers into your existing API and configure USDC settlement on Base. You go from zero to agent-accessible in days, not months.

How it works:
- You have an existing API (Express, Next.js, Cloudflare Workers, any HTTP server).
- We add x402 payment capability — your endpoints return 402 with payment instructions; agents pay in USDC and get access.
- USDC settles to your wallet on Base. You get a dashboard with real-time analytics on agent payment volume.

That's it. You don't have to learn how x402 works under the hood, run blockchain infra, or change your API architecture.

Why this matters now: There are over 72,000 AI agents paying for services via x402, with $600M+ in annualized volume across 960+ live endpoints. Stripe just added x402 support. CoinGecko is charging agents $0.01/request. This is going from experiment to real money fast — and most API providers are leaving it on the table because the integration is still too annoying.

The agent-side story: We also handle wallet creation and spending budgets for agent builders. If you're building agents that need to pay for things, Crow lets you create a wallet, fund it, set spending limits, and let your agent loose. The agent gets a budget, and you don't wake up to a surprise $10k bill.

What I'd love to hear:
- What's keeping you from adding agent payments today? Is it technical complexity, uncertainty about demand, or something else?
- Agent builders: how do you handle spending controls? Is "agent gets a wallet with a budget" the right abstraction?
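The 402-then-pay loop described above can be mocked in a few lines. This is a simplified illustration, not CrowPay's middleware or the exact x402 wire format: the field names, the `X-PAYMENT` header, and the trivial "verification" are assumptions; a real server would verify the payment on-chain or via a facilitator before serving.

```python
# Mock of an x402-style flow: an unpaid request gets HTTP 402 plus payment
# instructions; a request carrying a payment header gets the resource.
import json

PRICE_USDC = "0.01"  # assumed per-request price

def handle(request: dict) -> dict:
    """Return a {status, body} response for a simulated API request."""
    payment = request.get("headers", {}).get("X-PAYMENT")
    if payment is None:
        # No payment attached: tell the agent how to pay.
        return {
            "status": 402,
            "body": json.dumps({
                "accepts": [{"network": "base", "asset": "USDC",
                             "amount": PRICE_USDC,
                             "payTo": "0xYourWallet"}],
            }),
        }
    # Here a real implementation verifies `payment` before responding.
    return {"status": 200, "body": json.dumps({"data": "premium result"})}
```

An agent's loop is the mirror image: call the endpoint, and on a 402 read the `accepts` entry, construct a USDC payment, and retry with the payment attached.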