The best Show HN stories from Hacker News from the past day

Latest posts:

Show HN: We want to displace Notion with collaborative Markdown files

Hi HN! We at Moment[1] are working on a Notion alternative that is (1) rich and collaborative, but (2) also just plain-old Markdown files, stored in git (ok, technically in jj) on local disk. We think the era of rigid SaaS UI is basically over: coding agents (claude, amp, copilot, opencode, etc.) are now good enough to instantly build custom UI that fits your needs exactly. The very best agents in the world are coding agents, and we want to let people simply use them, e.g., to build little internal tools, without compromising on collaboration.

Moment aims to cover this and other gaps: seamless collaborative editing for teams, more robust built-in programming capabilities (including a from-scratch React integration), and tools for accessing private APIs.

A lot of our challenge is just making the collaborative editing work really well. We have found this is a lot harder than simply slapping Yjs on the frontend and calling it a day. We wrote about this previously, and the post[2] did pretty well on HN: "Lies I Was Told About Collaborative Editing" (352 upvotes as of this writing). Beyond that, in part 2 we'll talk about why we found it hard to get collab to run at 60fps consistently; for one, the Yjs ProseMirror bindings completely tear down and re-create the entire document on every single collaborative keystroke.

We hope you will try it out! At this stage even negative feedback is helpful. :)

[1]: https://www.moment.dev/

[2]: https://news.ycombinator.com/item?id=42343953

Show HN: Effective Git

As many of us shift from being software engineers to software managers, tracking changes the right way is growing more important.

It's time to truly understand and master Git.

Show HN: Explain Curl Commands

Show HN: I made a zero-copy coroutine tracer to find my scheduler's lost wakeups

Show HN: Open-Source Article 12 Logging Infrastructure for the EU AI Act

EU legislation (which in many cases affects UK and US companies too) requires being able to truly reconstruct agentic events.

I've worked in a number of regulated industries on and off for years, and recently hit this gap.

We already had strong observability, but if someone asked me to prove exactly what happened for a specific AI decision X months ago (and demonstrate that the log trail had not been altered), I could not.

The EU AI Act has already entered force, and its Article 12 kicks in in August this year, requiring automatic event recording and six-month retention for high-risk systems, which many legal commentators have suggested reads more like an append-only ledger requirement than standard application logging.

With this in mind, we built a small, free, open-source TypeScript library for Node apps using the Vercel AI SDK that captures inference as an append-only log.

It wraps the model in middleware, automatically logs every inference call to structured JSONL in your own S3 bucket, chains entries with SHA-256 hashes for tamper detection, enforces a 180-day retention floor, and provides a CLI to reconstruct a decision and verify integrity. There is also a coverage command that flags likely gaps (in practice omissions are a bigger risk than edits).

The library is deliberately simple: TS, targeting Vercel AI SDK middleware, S3 or local fs, linear hash chaining. It also works with Mastra (an agentic framework), and I am happy to expand its integrations via PRs.

Blog post with link to repo: https://systima.ai/blog/open-source-article-12-audit-logging

I'd value feedback, thoughts, and any critique.
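
The hash-chaining idea is simple enough to sketch. Below is a minimal illustration of a tamper-evident append-only log; the names (`LogEntry`, `append`, `verifyChain`) and the field layout are mine for illustration, not the library's actual API, and the real library also handles JSONL serialization, S3 storage, and retention:

```typescript
import { createHash } from "node:crypto";

// One append-only log entry: the payload plus the hash of the previous entry.
interface LogEntry {
  timestamp: string;
  payload: string;   // e.g. a serialized inference request/response
  prevHash: string;  // hash of the previous entry ("GENESIS" for the first)
  hash: string;      // SHA-256 over timestamp + payload + prevHash
}

function entryHash(timestamp: string, payload: string, prevHash: string): string {
  return createHash("sha256").update(timestamp + payload + prevHash).digest("hex");
}

// Append a new entry, chaining it to the last one.
function append(log: LogEntry[], payload: string, timestamp: string): void {
  const prevHash = log.length > 0 ? log[log.length - 1].hash : "GENESIS";
  log.push({ timestamp, payload, prevHash, hash: entryHash(timestamp, payload, prevHash) });
}

// Verify integrity: every entry's hash must match its contents, and every
// prevHash must equal the hash of the entry before it. Editing any earlier
// entry breaks every hash after it.
function verifyChain(log: LogEntry[]): boolean {
  let prevHash = "GENESIS";
  for (const e of log) {
    if (e.prevHash !== prevHash) return false;
    if (e.hash !== entryHash(e.timestamp, e.payload, e.prevHash)) return false;
    prevHash = e.hash;
  }
  return true;
}
```

Note that a hash chain makes edits detectable but not deletions from the tail, which is one reason the post's coverage command (flagging gaps) matters.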

Show HN: Rust compiler in PHP emitting x86-64 executables

Show HN: P0 – Yes, AI can ship complex features into real codebases

Show HN: Stacked Game of Life

https://github.com/vnglst/stacked-game-of-life

Show HN: Web Audio Studio – A Visual Debugger for Web Audio API Graphs

Hi HN,

I've been working on a browser-based tool for exploring and debugging Web Audio API graphs.

Web Audio Studio lets you write real Web Audio API code, run it, and see the runtime graph it produces as an interactive visual representation. Instead of mentally tracking connect() calls, you can inspect the actual structure of the graph, follow signal flow, and tweak parameters while the audio is playing.

It includes built-in visualizations for common node types (waveforms, filter responses, analyser time and frequency views, compressor transfer curves, waveshaper distortion, spatial positioning, delay timing, and more) so you can better understand what each part of the graph is doing. You can also insert an AnalyserNode between any two nodes to inspect the signal at that exact point in the chain.

There are around 20 templates (basic oscillator setups, FM/AM synthesis, convolution reverb, IIR filters, spatial audio, etc.), so you can start from working examples and modify them instead of building everything from scratch.

Everything runs fully locally in the browser: no signup, no backend.

The motivation came from working with non-trivial Web Audio graphs and finding it increasingly difficult to reason about structure and signal flow once things grow beyond simple examples. Most tutorials show small snippets, but real projects quickly become harder to inspect. I wanted something that stays close to the native Web Audio API while making the runtime graph visible and inspectable.

This is an early alpha and desktop-only for now.

I'd really appreciate feedback, especially from people who have used the Web Audio API in production or built audio tools. You can leave comments here, or use the feedback button inside the app.

https://webaudio.studio

Show HN: Pianoterm – Run shell commands from your Piano. A Linux CLI tool

A little weekend project, made so I can pause/play/rewind directly on the piano, when learning a song by ear.

Show HN: Vibe Code your 3D Models

Hi HN,

I'm the creator of SynapsCAD, an open-source desktop application I've been building that combines an OpenSCAD code editor, a real-time 3D viewport, and an AI assistant.

You can write OpenSCAD code, compile it directly to a 3D mesh, and use an LLM (OpenAI, Claude, Gemini, ...) to modify the code through natural language.

Demo video: https://www.youtube.com/watch?v=cN8a5UozS5Q

A bit about the architecture:

- It's built entirely in Rust.
- The UI and 3D viewport are powered by Bevy 0.15 and egui.
- It uses a pure-Rust compilation pipeline (openscad-rs for parsing and csgrs for constructive solid geometry rendering), so no external tools or WASM are required.
- Async AI network calls are handled by Tokio in the background to keep the Bevy render loop smooth.

Disclaimer: This is a very early prototype. The OpenSCAD parser/compiler doesn't support everything perfectly yet, so you will definitely hit some rough edges if you throw complex scripts at it.

I mostly just want to get this into the hands of people who tinker with CAD or Rust.

I'd be super happy for any feedback, architectural critiques, or bug reports, especially if you can drop specific OpenSCAD snippets that break the compiler in the GitHub issues!

GitHub (downloads for Win/Mac/Linux): https://github.com/ierror/synaps-cad

Happy to answer any questions about the tech stack or the roadmap!

Show HN: I built a zero-browser, pure-JS typesetting engine for bit-perfect PDFs

Hi HN, I'm a film director by trade, and I prefer writing my stories in plain text rather than using clunky screenplay software. Standard markup like Fountain doesn't work for me because I write in mixed languages, so I use Markdown with a custom syntax I invented to resemble standard screenplay structures.

This workflow is great until I need to actually generate an industry-standard screenplay PDF. I got tired of manually copying and pasting my text back into the clunky software just to export it, so I decided to write a script to automate the process. That's when I hit a wall.

I tried React-pdf and other high-level libraries, but they failed me on two fronts: true multilingual text shaping, and complex contextual pagination; specifically, the strict screenplay requirement to automatically inject (MORE) at the bottom of a page and (CONT'D) at the top of the next when a character's dialogue is split across a page break.

You can't really do that elegantly when the layout engine is a black box. So I bypassed them and built my own typesetting engine from scratch.

VMPrint is a deterministic, zero-browser layout VM written in pure TypeScript. It abandons the DOM entirely. It loads OpenType fonts, runs grapheme-accurate text segmentation (Intl.Segmenter), calculates interval-arithmetic spatial boundaries for text wrapping, and outputs a flat array of absolute coordinates.

Some stats:

- Zero dependencies on Node.js APIs or the DOM (runs in Cloudflare Workers, Lambda, the browser).
- 88 KiB core, packed.
- Performance: on a Snapdragon Elite ARM chip, the engine's "God Fixture" (8 pages of mixed CJK, Arabic RTL, drop caps, and multi-page spanning tables) completes layout and rendering in ~28ms.

The repo also includes draft2final, the CLI tool I built to convert Markdown into publication-grade PDFs (including the screenplay flavor) using this engine.

This is my first open-source launch. The manuscript is still waiting, but the engine shipped instead. I'd love to hear your thoughts, answer any questions about the math or the architecture, and see if anyone else finds this useful!

A note on AI usage: to be fully transparent about how this was built, I engineered the core concept (an all-flat, morphable box-based system inspired by game engines, applied to page layouts), the interval-arithmetic math, the grapheme segmentation, and the layout logic entirely by hand. I did use AI as a coding assistant at the functional level, but the overall software architecture, component structures, and APIs were meticulously designed by me.

For a little background: I've been a professional systems engineer since 1992. I've worked as a senior system architect for several Fortune 500 companies and currently serve as Chief Scientist at a major telecom infrastructure provider. I also created one of the world's first real-time video encoding technologies for low-power mobile phones (in the pre-smartphone era). I'm no stranger to deep tech, and a deterministic layout VM is exactly the kind of strict, math-heavy system that simply cannot be effectively constructed with a few lines of AI prompts.

Show HN: uBlock filter list to blur all Instagram Reels

A filter list for uBO that blurs all video and non-follower content on Instagram. Works on mobile with uBO Lite.

Related: https://news.ycombinator.com/item?id=47016443

Show HN: Omni – Open-source workplace search and chat, built on Postgres

Hey HN!

Over the past few months, I've been working on building Omni, a workplace search and chat platform that connects to apps like Google Drive/Gmail, Slack, Confluence, etc. Essentially an open-source alternative to Glean, fully self-hosted.

I noticed that some orgs find Glean to be expensive and not very extensible. I wanted to build something that small to mid-size teams could run themselves, so I decided to build it all on Postgres (ParadeDB, to be precise) and pgvector. No Elasticsearch or dedicated vector databases. I figured Postgres is more than capable of handling the level of scale required.

To bring up Omni on your own infra, all it takes is a single `docker compose up` and some basic configuration to connect your apps and LLMs.

What it does:

- Syncs data from all connected apps and builds a BM25 index (ParadeDB) and an HNSW vector index (pgvector)
- Hybrid search combines results from both
- Chat UI where the LLM has tools to search the index, not just basic RAG
- Traditional search UI
- Users bring their own LLM provider (OpenAI/Anthropic/Gemini)
- Connectors for Google Workspace, Slack, Confluence, Jira, HubSpot, and more
- Connector SDK to build your own custom connectors

Omni is in beta right now, and I'd love your feedback, especially on the following:

- Has anyone tried self-hosting workplace search and/or AI tools, and what was your experience like?
- Any concerns with the Postgres-only approach at larger scales?

Happy to answer any questions!

The code: https://github.com/getomnico/omni (Apache 2.0 licensed)
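
The post doesn't say how the BM25 and vector result lists are merged; one common scheme for this kind of hybrid search is reciprocal rank fusion (RRF), sketched here as an illustration only. The function name `rrf` and the constant k=60 are the conventional defaults from the RRF literature, not taken from Omni's code:

```typescript
// Reciprocal rank fusion: merge two ranked lists of document IDs by giving
// each document a score of 1 / (k + rank) in each list it appears in, then
// sorting by total score. Documents ranked highly in both lists win.
function rrf(bm25: string[], vector: string[], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const list of [bm25, vector]) {
    list.forEach((id, i) => {
      // rank is 1-based: the top result of a list contributes 1 / (k + 1)
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + i + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}
```

RRF's appeal here is that it needs no score normalization between the two very differently scaled signals (BM25 scores vs. cosine distances): only the ranks matter.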

Show HN: Timber – Ollama for classical ML models, 336x faster than Python

Show HN: Govbase – Follow a bill from source text to news bias to social posts

Govbase tracks every bill, executive order, and federal regulation from official sources (Congress.gov, Federal Register, White House). An AI pipeline breaks each one down into plain-language summaries and shows who it impacts by demographic group.

It also ties each policy directly to bias-rated news coverage and politician social posts on X, Bluesky, and Truth Social. You can follow a single bill from the official text, to how the media frames it, to what your representatives are saying about it.

Free on web, iOS, and Android.

https://govbase.com

I'd love feedback from the community, especially on the data pipeline or what policy areas/features you feel are missing.

Show HN: I built a sub-500ms latency voice agent from scratch

I built a voice agent from scratch that averages ~400ms end-to-end latency (caller stops speaking → first syllable). That's with full STT → LLM → TTS in the loop, clean barge-ins, and no precomputed responses.

What moved the needle:

- Voice is a turn-taking problem, not a transcription problem. VAD alone fails; you need semantic end-of-turn detection.
- The system reduces to one loop: speaking vs. listening. The two transitions (cancel instantly on barge-in, respond instantly on end-of-turn) define the experience.
- STT → LLM → TTS must stream. Sequential pipelines are dead on arrival for natural conversation.
- TTFT dominates everything. In voice, the first token is the critical path. Groq's ~80ms TTFT was the single biggest win.
- Geography matters more than prompts. Colocate everything or you lose before you start.

GitHub repo: https://github.com/NickTikhonov/shuo

Follow whatever I tinker with next: https://x.com/nick_tikhonov
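
The speaking-vs-listening loop with its two critical transitions can be sketched as a tiny state machine. This is my illustration of the idea, not the repo's actual implementation; the names (`Agent`, `step`, the event strings) are made up:

```typescript
// The agent is always in exactly one of two states. The two transitions that
// define the experience: barge-in (user audio while speaking) must cancel
// playback instantly, and end-of-turn (while listening) must trigger a
// response instantly.
type State = "listening" | "speaking";
type Event = "user_audio" | "end_of_turn" | "response_done";

interface Agent {
  state: State;
  cancelled: number; // barge-ins observed
  responses: number; // responses started
}

function step(agent: Agent, event: Event): Agent {
  if (agent.state === "speaking" && event === "user_audio") {
    // Barge-in: cancel TTS immediately and return to listening.
    return { ...agent, state: "listening", cancelled: agent.cancelled + 1 };
  }
  if (agent.state === "listening" && event === "end_of_turn") {
    // Semantic end-of-turn detected: start streaming a response.
    return { ...agent, state: "speaking", responses: agent.responses + 1 };
  }
  if (agent.state === "speaking" && event === "response_done") {
    return { ...agent, state: "listening" };
  }
  return agent; // all other events are no-ops in this sketch
}
```

In a real pipeline each transition also has side effects (flushing the TTS buffer, aborting the in-flight LLM stream), which is where most of the latency budget hides.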

Show HN: Vertex.js – A 1kloc SPA Framework

Vertex is a 1kloc SPA framework containing everything you need from React, Ractive-Load, and jQuery, while still being jQuery-compatible.

vertex.js is a single, self-contained file with no build step and no dependencies.

It also exhibits the curious quality of being faster, in some cases, than over a decade of engineering at Facebook: https://files.catbox.moe/sqei0d.png
