The best Show HN stories on Hacker News from the past day

Latest posts:

Show HN: I taught AI to commentate Pong in real time

Show HN: Use Third Party LLM API in JetBrains AI Assistant

Show HN: Free, in-browser PDF editor

Add text, input boxes, pictures, and signatures; delete pages; merge PDFs; and password-protect them. Everything happens in the browser, 100% free, with no sign-up.

Show HN: Hyperparam: OSS tools for exploring datasets locally in the browser

For the last year I’ve been developing Hyperparam, a collection of small, fast, dependency-free open-source libraries designed for data scientists and ML engineers to actually look at their data.

- Hyparquet: read any Parquet file in the browser or Node.js
- Icebird: explore Iceberg tables without needing Spark/Presto
- HighTable: virtual scrolling of millions of rows
- Hyparquet-Writer: export Parquet easily from JS
- Hyllama: read llama.cpp .gguf LLM metadata efficiently

CLI for viewing local files: npx hyperparam dataset.parquet

Example dataset on a Hugging Face Space: https://huggingface.co/spaces/hyperparam/hyperparam?url=https://huggingface.co/datasets/glaiveai/reasoning-v1-20m/blob/refs/convert/parquet/default/train/0000.parquet

No cloud uploads. No backend servers. A better way to build frontend data applications.

GitHub: https://github.com/hyparam. Feedback and PRs welcome!
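
A note on why in-browser Parquet reading is feasible at all: Parquet keeps all of its metadata in a footer at the end of the file, so a reader can locate every row group from just the file's tail (for example via an HTTP range request) instead of downloading the whole file. Here is a minimal Python sketch of that file layout; this is my own illustration of the format, not hyparquet's code:

```python
import struct

MAGIC = b"PAR1"

def footer_range(file_bytes: bytes) -> tuple[int, int]:
    """Return (offset, length) of the Thrift-encoded footer metadata.

    Parquet files end with: [footer bytes][4-byte LE footer length]["PAR1"].
    Reading just the tail is enough to find all row-group offsets, which is
    what makes ranged reads of remote Parquet files practical.
    """
    assert file_bytes[:4] == MAGIC and file_bytes[-4:] == MAGIC, "not a Parquet file"
    (footer_len,) = struct.unpack("<I", file_bytes[-8:-4])
    return len(file_bytes) - 8 - footer_len, footer_len

# Synthetic stand-in: a fake "file" with a 10-byte footer (not real Thrift data).
fake = MAGIC + b"columns..." + b"\x00" * 10 + struct.pack("<I", 10) + MAGIC
offset, length = footer_range(fake)
```

The same trick works over HTTP: fetch the last few kilobytes, parse the footer, then fetch only the column chunks you need.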

Show HN: OSle – A 510 bytes OS in x86 assembly

(Sorry about the double post; I forgot to put Show HN in front in the original thread: https://news.ycombinator.com/item?id=43863689)

Hey all! As a follow-up to my relatively successful series on x86 assembly from last year [1], I started making an OS that fits in a boot sector. I am purposefully not doing chain loading or multi-stage booting, to see how much I can squeeze out of 510 bytes.

It comes with a file system, a shell, and simple process management. That's enough to write non-trivial guest applications, like a text editor and even some games. It's a lot of fun!

It also comes with an SDK, and you can play around with it in the browser to see what it looks like.

The aim is, as always, to make assembly less scary, and this time around OS development too.

[1]: https://news.ycombinator.com/item?id=41571971
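
For context on the 510-byte figure: a BIOS boot sector is 512 bytes, and the firmware requires the final two bytes to be the signature 0x55 0xAA, which leaves 510 bytes for code and data. A quick Python sketch of assembling such an image (my illustration, not part of OSle's toolchain):

```python
def make_boot_sector(code: bytes) -> bytes:
    """Pad machine code to a 512-byte BIOS boot sector.

    The BIOS loads sector 0 and only boots it if bytes 510-511 hold the
    signature 0x55 0xAA; everything before that (510 bytes) is yours.
    """
    if len(code) > 510:
        raise ValueError(f"code is {len(code)} bytes; only 510 fit")
    return code.ljust(510, b"\x00") + b"\x55\xaa"

sector = make_boot_sector(b"\xf4")  # 0xF4 = x86 HLT, a one-byte "OS"
```

Writing the resulting 512 bytes to the first sector of a disk image is all it takes to make it bootable in an emulator like QEMU.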

Show HN: Blast – Fast, multi-threaded serving engine for web browsing AI agents

Hi HN!

BLAST is a high-performance serving engine for browser-augmented LLMs, designed to make deploying web-browsing AI easy, fast, and cost-manageable.

The goal with BLAST is ultimately to achieve Google-search-level latencies for tasks that currently require a lot of typing and clicking around inside a browser. We're starting off with automatic parallelism, prefix caching, budgeting (memory and LLM cost), and an OpenAI-compatible API, but we have a ton of ideas in the pipeline!

Website and docs: https://blastproject.org/ and https://docs.blastproject.org/

MIT-licensed open source: https://github.com/stanford-mast/blast

I hope some folks here find this useful! Please let me know what you think in the comments or ping me on Discord.

— Caleb (PhD student @ Stanford CS)
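
Prefix caching, mentioned above, exploits the fact that agent prompts often share long common prefixes (system prompt, page template), so work done for one request can seed the next. A toy Python sketch of the lookup side of that idea (conceptual only; the class and method names here are mine, not BLAST's API):

```python
class PrefixCache:
    """Toy prefix cache: if a new prompt starts with a previously
    processed prompt, reuse that work and only handle the suffix."""

    def __init__(self):
        self._done: dict[str, str] = {}  # processed prefix -> cached state

    def lookup(self, prompt: str) -> tuple[str, str]:
        """Return (longest cached prefix, remaining suffix to process)."""
        best = ""
        for prefix in self._done:
            if prompt.startswith(prefix) and len(prefix) > len(best):
                best = prefix
        return best, prompt[len(best):]

    def store(self, prompt: str, state: str) -> None:
        self._done[prompt] = state

cache = PrefixCache()
cache.store("Summarize this page:", "kv-state-1")
hit, todo = cache.lookup("Summarize this page: https://example.com")
```

In a real serving engine the cached "state" would be the model's KV cache for those tokens, which is where the latency savings come from.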

Show HN: GPT-2 implemented using graphics shaders

Back in the old days, people did general-purpose GPU programming with shader languages like GLSL. This is what inspired NVIDIA (and other companies) to eventually create CUDA (and friends). This is an implementation of GPT-2 using WebGL and shaders. Enjoy!
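
The core trick of shader-era GPGPU: store matrices in textures and let each fragment (pixel) compute one output element, so the rasterizer becomes a parallel for-loop. A CPU-side Python sketch of that decomposition (my illustration of the general technique, not this project's WebGL code):

```python
def fragment(a, b, row, col, k):
    # One fragment's work: dot product of A's row with B's column.
    # In GLSL this would be a loop over texture fetches from two textures.
    return sum(a[row][i] * b[i][col] for i in range(k))

def shader_matmul(a, b):
    """Matrix multiply decomposed the way a fragment shader would do it:
    one independent computation per output element (row, col)."""
    n, k, m = len(a), len(b), len(b[0])
    # A GPU would run every (row, col) fragment in parallel; we just loop.
    return [[fragment(a, b, r, c, k) for c in range(m)] for r in range(n)]

out = shader_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])
```

Chaining such matmul "draw calls" with elementwise shaders (softmax, GELU) is enough to express a transformer forward pass.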

Show HN: I built a synthesizer based on 3D physics

I've been working on the Anukari 3D Physics Synthesizer for a little over two years now. It's one of the earliest virtual instruments to rely on the GPU for audio processing, which has been incredibly challenging and fun. In the end, predictably, the GUI for manipulating the 3D system actually ended up being a lot more work than the physics simulation.

So far I am only selling it directly on my website, which seems to be working well. I hope to turn it into a sustainable business, and ideally I'd have enough revenue to hire folks to help with it. So far it's been 99% a solo project, with (awesome) contractors brought in for some of the stuff that I'm bad at, like the 3D models and making instrument presets/videos.

The official launch announcement video is here: https://www.youtube.com/watch?v=NYX_eeNVIEU

But if you REALLY want to see what it can do, check out what Mick Gordon did with it on the first day: https://x.com/Mick_Gordon/status/1918146487948919222

I've kept a fairly detailed developer log about my progress on the project since October 2023, which might be of interest to the hardcore technical folks here: https://anukari.com/blog/devlog

I also gave a talk at Audio Developer Conference 2023 (ADC23) that goes deep into a couple of the problems I solved for Anukari: https://www.youtube.com/watch?v=lb8b1SYy73Q
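
The physics-to-audio idea boils down to: simulate masses, springs, and dampers at audio rate and read a mass's displacement as the output sample. A single-oscillator Python sketch of that principle (my illustration of physical modeling in general, not Anukari's GPU code, which simulates whole 3D networks of such objects):

```python
import math

def pluck(freq_hz=440.0, damping=50.0, sr=48000, n=256):
    """Toy physical model: one unit mass on a damped spring, integrated
    once per audio sample with semi-implicit (symplectic) Euler."""
    omega = 2 * math.pi * freq_hz
    k = omega ** 2                 # spring constant giving freq_hz for unit mass
    x, v, dt = 1.0, 0.0, 1.0 / sr  # mass displaced to 1.0, released at rest
    out = []
    for _ in range(n):
        a = -k * x - damping * v   # spring restoring force plus damping
        v += a * dt                # update velocity first (symplectic Euler),
        x += v * dt                # then position: stable for audio-rate k
        out.append(x)              # the mass's displacement IS the sample
    return out

samples = pluck()
```

With 256 samples at 48 kHz this covers a bit over two periods of the 440 Hz oscillation, so the output swings negative and slowly decays.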
