The best Show HN stories from Hacker News from the past week
Latest posts:
Show HN: RISC-V assembly tabletop board game (hack your opponent)
I made this game to teach my daughter how buffer overflows work. I want her to look at programs as things she can change, and make them do whatever she wants.<p>Building your exploit in memory and jumping to it feels so cool. I hope this game teaches kids and programmers (who seem to have forgotten what computers actually are) that it's quite fun to mess with programs. We used to have that excitement a few years ago: just break into SoftICE and change a branch into a nop to skip the serial-number check, or jump to a different game level because this one is too annoying.<p>While working on the game I kept thinking about what we have lost from the 6502 to Apple Silicon, and about the transition from 'personal computers' to 'you are completely not responsible for most of the code running on your device'. It made me a bit sad and happy at the same time. RISC-V seems like a breath of fresh air, and many hackers will build many new things: new protocols, new networks, new programs. As the Pi 4's cost increases, the ESP32's cost is decreasing; we have transparent displays for $20, good computers for $5, cheap LoRa, and so on.
Everything is more accessible than ever.<p>I played with a friend who saw completely different exploits than I did, and I learned a lot from just a few games. Because of the game's complexity, you often end up in positions where your own actions surprise you :) So if you manage to find at least one friend who is not completely stunned by the assembler, I think you will have a good time.<p>A huge inspiration comes from Phrack 49's 'Smashing The Stack For Fun And Profit', which demystified the stack for me: <a href="http://phrack.org/issues/49/14.html#article" rel="nofollow noreferrer">http://phrack.org/issues/49/14.html#article</a><p>TL;DR: computers are fun, and you can make them do things.<p>PS: In order to play with my friends I also built an ESP32 helper[1] that keeps track of the game state. After building it and writing all the code, I realized I could have just used media queries on the web version of the game.. but anyway, it's way cooler to have a board game contraption.<p>[1]: <a href="https://punkx.org/overflow/esp32.html" rel="nofollow noreferrer">https://punkx.org/overflow/esp32.html</a>
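The stack-smashing idea the game (and the Phrack article) teaches can be sketched in a toy simulator. This is only an illustration of the mechanism, not the board game's actual rules: a fixed-size buffer sits below a saved "return address" slot, and an unchecked copy can overwrite it.

```python
# Toy model of a stack buffer overflow: memory is a flat list of "bytes",
# the buffer occupies slots 0..7, and the saved return address lives at slot 8.

def run(memory, buffer_start, payload, saved_return_slot):
    """Copy `payload` into the buffer with no bounds check, then 'return'."""
    for i, byte in enumerate(payload):
        memory[buffer_start + i] = byte   # overflow: nothing stops i >= 8
    return memory[saved_return_slot]      # the address the CPU would jump to

benign = [0x41] * 8                 # exactly fills the buffer
exploit = [0x90] * 8 + [0xBE]       # 9th byte lands on the return address

assert run([0] * 16, 0, benign, 8) == 0      # in-bounds: jump target untouched
assert run([0] * 16, 0, exploit, 8) == 0xBE  # overflow redirects the "jump"
```

In a real program the overflowing bytes would be attacker-chosen machine code plus a pointer back into it, which is exactly the trick the game has players build by hand.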
Show HN: Carton – Run any ML model from any programming language
The goal of Carton is to let you use a single interface to run any machine learning model from any programming language.<p>It’s currently difficult to integrate models that use different technologies (e.g. TensorRT, Ludwig, TorchScript, JAX, GGML, etc) into your application, especially if you’re not using Python. Even if you learn the details of integrating each of these frameworks, running multiple frameworks in one process can cause hard-to-debug crashes.<p>Ideally, the ML framework a model was developed in should just be an implementation detail. Carton lets you decouple your application from specific ML frameworks so you can focus on the problem you actually want to solve.<p>At a high level, the way Carton works is by running models in their own processes and using an IPC system to communicate back and forth with low overhead. Carton is primarily implemented in Rust, with bindings to other languages. There are lots more details linked in the architecture doc below.<p>Importantly, Carton uses your model’s original underlying framework (e.g. PyTorch) under the hood to actually execute the model. This is meaningful because it makes Carton composable with other technologies. For example, it’s easy to use custom ops, TensorRT, etc without changes. This lets you keep up with cutting-edge advances, but decouples them from your application.<p>I’ve been working on Carton for almost a year now and I’m excited to open source it today!<p>Some useful links:<p>* Website, docs, quickstart - <a href="https://carton.run" rel="nofollow noreferrer">https://carton.run</a><p>* Explore existing models - <a href="https://carton.pub" rel="nofollow noreferrer">https://carton.pub</a><p>* Repo - <a href="https://github.com/VivekPanyam/carton">https://github.com/VivekPanyam/carton</a><p>* Architecture - <a href="https://github.com/VivekPanyam/carton/blob/main/ARCHITECTURE.md">https://github.com/VivekPanyam/carton/blob/main/ARCHITECTURE...</a><p>Please let me know what you think!
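The process-isolation design described above can be sketched in miniature. Carton itself is implemented in Rust and its real IPC layer is far more efficient; this Python toy only shows the shape of the idea — the "model" runs in its own process, and the application talks to it over a pipe, so a crash or framework conflict in the worker cannot take down the caller.

```python
import json
import subprocess
import sys

# Stand-in "model" that runs in its own process and speaks JSON over stdio.
WORKER = """
import json, sys
inputs = json.loads(sys.stdin.read())        # request arrives over IPC (stdin)
print(json.dumps([x * 2 for x in inputs]))   # "inference": double each input
"""

def run_model(inputs):
    """Send inputs to the isolated worker process and read back its outputs."""
    proc = subprocess.run(
        [sys.executable, "-c", WORKER],
        input=json.dumps(inputs), capture_output=True, text=True, check=True,
    )
    return json.loads(proc.stdout)

assert run_model([1, 2, 3]) == [2, 4, 6]
```

The caller never links against the worker's framework; it only sees serialized tensors coming back, which is what makes the framework an implementation detail.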
Show HN: Generative Fill with AI and 3D
Hey all,<p>You've probably seen projects that add objects to an image from a style or text prompt, like InteriorAI (levelsio) and Adobe Firefly. The prevalent issue with these diffusion-based inpainting approaches is that they don't yet have great conditioning on lighting, perspective, and structure. You'll often get incorrect or generic shadows; warped-looking objects; and distorted backgrounds.<p>What is Fill 3D?
Fill 3D is an exploration of doing generative fill in 3D to render ultra-realistic results that harmonize with the background image, using industry-standard path tracing, akin to compositing in Hollywood movies.<p>How does it work?
1. Deproject: First, deproject an image to a 3D shell using both geometric and photometric cues from the input image.
2. Place: Draw rectangles and describe what you want in them, akin to Photoshop's Generative Fill feature.
3. Render: Use good ol' path tracing to render ultra-realistic results.<p>Why Fill 3D?
+ The results are insanely realistic (see video in the github repo, or on the website).
+ Fast enough: Currently, generations take 40-80 seconds. Diffusion takes ~10 seconds, so we're slower, but for this level of realism it's pretty good.
+ Potential applications: I'm thinking of virtual staging in real estate media; what do you think?<p>Check it out at <a href="https://fill3d.ai" rel="nofollow noreferrer">https://fill3d.ai</a>
+ There's API access! :D
+ Right now, you need an image of an empty room. Will loosen this restriction over time.<p>Fill 3D is built on Function (<a href="https://fxn.ai" rel="nofollow noreferrer">https://fxn.ai</a>). With Function, I can run the Python functions that do the steps above on powerful GPUs with only code (no Dockerfile, YAML, k8s, etc), and invoke them from just about anywhere. I'm the founder of fxn.<p>Tell me what you think!!<p>PS: This is my first Show HN, so please be nice :)
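The three-step pipeline described above can be sketched as composable stages. All function bodies here are hypothetical stand-ins — the real deprojection and path tracing are heavy GPU workloads — but the data flow (image in, placements registered, render out) follows the steps listed.

```python
# Hypothetical sketch of the deproject -> place -> render pipeline.

def deproject(image_path):
    """Lift the input photo to a rough 3D shell (stub for the real geometry/photometry step)."""
    return {"image": image_path, "shell": "estimated-geometry", "placements": []}

def place(scene, rect, prompt):
    """Register a user-drawn rectangle plus a text prompt describing what to put there."""
    scene["placements"].append({"rect": rect, "prompt": prompt})
    return scene

def render(scene):
    """Path-trace the placed objects into the original photo (stub)."""
    return f"{len(scene['placements'])} object(s) composited into {scene['image']}"

scene = deproject("empty_room.jpg")
scene = place(scene, (10, 20, 200, 150), "a mid-century armchair")
assert render(scene) == "1 object(s) composited into empty_room.jpg"
```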
Show HN: A JavaScript function that looks and behaves like a pipe operator
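The core idea of a pipe operator — feed a value through a sequence of functions left to right — is language-agnostic; the post's implementation is JavaScript, but the same concept can be sketched in a few lines of Python:

```python
from functools import reduce

def pipe(*fns):
    """Compose functions left to right: pipe(f, g)(x) == g(f(x))."""
    return lambda value: reduce(lambda acc, fn: fn(acc), fns, value)

double = lambda n: n * 2
increment = lambda n: n + 1

assert pipe(double, increment)(5) == 11   # (5 * 2) + 1
assert pipe(increment, double)(5) == 12   # (5 + 1) * 2
```

Reading order matches execution order, which is the readability win over nested calls like `increment(double(5))`.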
Show HN: Magentic – Use LLMs as simple Python functions
This is a Python package that allows you to write function signatures to define LLM queries. This makes it easy to mix regular code with calls to LLMs, which enables you to use the LLM for its creativity and reasoning while also enforcing structure/logic as necessary. LLM output is parsed for you according to the return type annotation of the function, including complex return types such as streaming an array of structured objects.<p>I built this to show that we can think about using LLMs more fluidly than just chains and chats, i.e. more interchangeably with regular code, and to make it easy to do that.<p>Please let me know what you think! Contributions welcome.<p><a href="https://github.com/jackmpcollins/magentic">https://github.com/jackmpcollins/magentic</a>
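The idea described above — a decorator that turns a function signature into an LLM query and parses the reply to the annotated return type — can be illustrated with a toy. Note this is a simplified sketch of the concept, not Magentic's actual API: `fake_llm` stands in for a real model call.

```python
import typing

def fake_llm(prompt_text: str) -> str:
    """Stand-in for a real LLM backend; always answers '3'."""
    return "3"

def prompt(template):
    """Toy decorator: format the template with the call's arguments,
    send it to the 'LLM', and coerce the reply to the return annotation."""
    def decorate(fn):
        hints = typing.get_type_hints(fn)
        return_type = hints.pop("return", str)
        def wrapper(**kwargs):
            raw = fake_llm(template.format(**kwargs))
            return return_type(raw)   # parse per the annotation
        return wrapper
    return decorate

@prompt("How many syllables are in the word {word}? Answer with a number.")
def count_syllables(word: str) -> int:
    ...   # body is never executed; the signature is the query

result = count_syllables(word="potato")
assert result == 3 and isinstance(result, int)
```

The key point is that the caller gets a typed value back, so LLM calls slot into regular code the same way any other function does.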
Show HN: Unity like game editor running in pure WASM
In the wake of all the Unity nonsense, just wanted to toss the Raverie engine into the mix :)<p>We're building off a previous engine that we worked on for DigiPen Institute of Technology, called the Zero Engine, with a component-based design architecture similar to Unity's. Our engine had a unique feature called Spaces: separate worlds/levels that you can instantiate and run at the same time, which became super useful for creating UI overlays using only game objects, running multiple simulations, etc. The lighting and rendering engine is scriptable, and the default deferred-rendering implementation is based on the Unreal physically based rendering (PBR) approach. The physics engine was built from the ground up to handle both 2D and 3D physics together. The scripting language was also built in-house to be a type-safe language that binds to C++ objects and facilitates auto-complete (try it in the editor!)<p>This particular fork by Raverie builds both the engine and editor to WebAssembly using only clang, without Emscripten. We love Emscripten, and in fact borrowed a tiny bit of exception code that we'd love to see upstreamed into LLVM, but we wanted to create a pure WASM binary without Emscripten bindings. We love WASI too, though we already had our own in-memory virtual file system, hence we don't use the WASI imports. All WASM imports and exports needed to run the engine are defined here:
<a href="https://github.com/raverie-us/raverie-engine/blob/main/Code/Foundation/Platform/PlatformCommunication.hpp">https://github.com/raverie-us/raverie-engine/blob/main/Code/...</a><p>The abstraction means that in the future, porting to other platforms that support a WASM runtime should be trivial. It's our dream to be able to export a build of your game to any platform, all from inside the browser. Our near-term roadmap includes getting the sound engine integrated with WebAudio, getting the script debugger working (it currently freezes), porting our networking engine to WebRTC and WebSockets, and getting saving/loading working against a database instead of browser local storage.<p>Our end goal is to use this engine to create an online Flash-like hub for games that people can share and remix, akin to Scratch or Tinkercad.<p><a href="https://github.com/raverie-us/raverie-engine">https://github.com/raverie-us/raverie-engine</a>
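The "Spaces" feature described above — independent worlds that are instantiated and stepped side by side — can be sketched in a few lines. This is a toy illustration of the concept, not the engine's actual API; a real engine would step physics, scripts, and rendering inside `update`.

```python
# Toy sketch of "Spaces": independent worlds updated together each frame,
# e.g. a gameplay space plus a UI overlay built from ordinary game objects.

class Space:
    def __init__(self, name):
        self.name = name
        self.objects = []
        self.ticks = 0

    def add(self, obj):
        self.objects.append(obj)

    def update(self, dt):
        self.ticks += 1   # stand-in for stepping physics/scripts by dt

game_space = Space("game")
ui_overlay = Space("ui")        # UI overlay is just another Space
game_space.add("player")
ui_overlay.add("health-bar")

for space in (game_space, ui_overlay):   # both simulations run every frame
    space.update(dt=1 / 60)

assert game_space.ticks == ui_overlay.ticks == 1
```

Keeping each world's state in its own container is what lets overlays and parallel simulations coexist without leaking objects between them.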
Show HN: Minum – A minimal Java web framework
I am happy to announce my minimalist zero-dependency web framework, Minum, is out of beta.<p><a href="http://github.com/byronka/minum">http://github.com/byronka/minum</a><p>You will be hard-pressed to find another modern project as obsessively minimalistic. Other frameworks will claim simplicity and minimalism and then, casually, mention they are built on a multitude of libraries. This follows self-imposed constraints, predicated on a belief that smaller and lighter is long-term better.<p>Caveat emptor: This is a project by and for developers who know and like programming (rather than, let us say, configuring). It is written in Java, and presumes familiarity with the HTTP/HTML paradigm.<p>Driving paradigms of this project:<p>* ease of use
* maintainability / sustainability
* simplicity
* performance
* good documentation
* good testing<p>It requires Java 21 for its virtual threads (Project Loom).
Show HN: E-Ink Day Schedule
Show HN: Get your entire ChatGPT history in Markdown files
This is just a small thing I coded to help me see my entire conversation history in beautiful Markdown, in Obsidian (my note-taking app).<p>[Link to GitHub repo](<a href="https://github.com/mohamed-chs/chatgpt-history-export-to-md">https://github.com/mohamed-chs/chatgpt-history-export-to-md</a>).<p>The Python script converts conversations extracted from ChatGPT (the ZIP export of all your data, sent by OpenAI) into neatly formatted Markdown files.<p>It also adds YAML metadata headers and includes Code Interpreter (Advanced Data Analysis) input/output code.<p>Feel free to fork the repo and implement your own improvements; I feel like there's a lot more to be extracted from the data. Any feedback or contributions are welcome!<p>I found Chrome extensions to be a bit slow and sometimes overkill for this, although I did enjoy the folder system in some of them.<p>[Link to first post](<a href="https://www.reddit.com/r/ChatGPT/comments/16k1ub5/i_made_a_simple_chatgpt_history_to_markdown/" rel="nofollow noreferrer">https://www.reddit.com/r/ChatGPT/comments/16k1ub5/i_made_a_s...</a>)
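The core of such a conversion — a conversation record becoming a Markdown file with a YAML front-matter header — can be sketched simply. Note the real `conversations.json` schema in a ChatGPT export is considerably more involved than the toy shape assumed here:

```python
# Simplified sketch: one conversation dict -> one Markdown document
# with a YAML front-matter header followed by the messages.

def conversation_to_markdown(convo):
    lines = [
        "---",
        f"title: {convo['title']}",
        f"created: {convo['created']}",
        "---",
        "",
    ]
    for msg in convo["messages"]:
        lines.append(f"### {msg['role'].capitalize()}")
        lines.append(msg["content"])
        lines.append("")
    return "\n".join(lines)

convo = {
    "title": "Hello world",
    "created": "2023-09-20",
    "messages": [
        {"role": "user", "content": "Hi!"},
        {"role": "assistant", "content": "Hello! How can I help?"},
    ],
}
md = conversation_to_markdown(convo)
assert md.startswith("---\ntitle: Hello world")
assert "### Assistant" in md
```

Obsidian reads the `---` block as note properties, which is what makes the YAML header worth emitting.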
Show HN: Rapidpages – OSS alternative to Vercel's v0
Hey everyone,<p>Really excited to share what I've been working on. Rapidpages is a prompt-first online IDE — think Midjourney for front-end developers. I've been working on this for a while, and it's great to see interest from companies like Vercel in this space.<p>All you need for self-hosting is an OpenAI key and a GitHub OAuth app. Simply clone the repo and play with it. It's also available in the cloud at www.rapidpages.io<p>Please give it a try and let me know if you have any feedback, and if you like what I'm doing with Rapidpages, please give it a star on GitHub.<p>Thanks!
Show HN: Learn piano without sheet music
I always found sheet music way too hard to read - and I literally spent a year at a company building a sheet music rendering engine. I wanted an app that would display music like the tutorials on YouTube, but not be focused on upselling lessons etc. like most current apps, and that would also let me import my own files.<p>It works on MIDI files. If it's a valid MIDI file, it probably plays.<p>Since releasing, I did add a subscription for classical music - on the theory that most normal users don't know what a MIDI file is. About a month ago it changed from an up-front price to in-app purchases and/or a subscription - which has absolutely tanked revenue so far, but maybe it will pick up.<p>Would love to hear your thoughts and any suggestions you have!
Show HN: Paisa – Open-Source Personal Finance Manager
I have been using plaintext accounting for some time and had a duct-taped-together reporting system. Paisa is my latest attempt at making it usable for others.<p>I am interested in knowing what people normally want to understand about their finances.<p>PS: Please avoid editing the demo data. Download and run locally if you want to edit.
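For readers unfamiliar with plaintext accounting: the underlying data is just text files of dated, double-entry transactions. A toy parser shows the shape; real ledger syntax handles much more (currencies, comments, elided amounts), so this is only a minimal sketch.

```python
# Toy parser for a ledger-style plain-text transaction, the kind of
# journal file a tool like Paisa reports on.

def parse_transaction(text):
    """Parse 'DATE PAYEE' followed by indented 'ACCOUNT AMOUNT' postings."""
    lines = [line for line in text.splitlines() if line.strip()]
    date, _, payee = lines[0].partition(" ")
    postings = []
    for line in lines[1:]:
        account, amount = line.rsplit(maxsplit=1)
        postings.append((account.strip(), float(amount)))
    return {"date": date, "payee": payee, "postings": postings}

entry = """2023-09-20 Grocery Store
    Expenses:Food     42.50
    Assets:Checking  -42.50"""

txn = parse_transaction(entry)
assert txn["payee"] == "Grocery Store"
assert sum(amount for _, amount in txn["postings"]) == 0.0   # double entry balances
```

Because every transaction balances to zero across accounts, reports like net worth or category spending reduce to sums over these postings.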