The best Show HN stories on Hacker News from the past day

Latest posts:

Show HN: RISC-V assembly tabletop board game (hack your opponent)

I made this game to teach my daughter how buffer overflows work. I want her to look at programs as things she can change, and make them do whatever she wants.

Building your exploit in memory and jumping to it feels so cool. I hope this game teaches kids and programmers (who seem to have forgotten what computers actually are) that it's quite fun to mess with programs. We used to have that excitement a few years ago: just break into SoftICE and change a branch into a NOP to ignore the serial number check, or jump to a different game level because this one is too annoying.

While working on the game I kept thinking about what we have lost from the 6502 to Apple Silicon, and about the transition from 'personal computers' to 'you are not responsible for most of the code running on your device'. It made me a bit sad and happy at the same time. RISC-V seems like a breath of fresh air, and many hackers will build many new things: new protocols, new networks, new programs. As the Pi 4's cost increases, the ESP32's cost is decreasing; we have transparent displays for $20, good computers for $5, cheap LoRa, and so on. Everything is more accessible than ever.

I played with a friend who saw completely different exploits than I did, and I learned a lot from just a few games. Because of the game's complexity, you often end up in positions where your own actions surprise you :) So if you manage to find at least one friend who isn't completely stunned by the assembly, I think you will have a good time.

A huge inspiration comes from Phrack 49's 'Smashing The Stack For Fun And Profit', which demystified the stack for me: http://phrack.org/issues/49/14.html#article

TL;DR: computers are fun, and you can make them do things.

PS: In order to play with my friends I also built an ESP32 helper [1] that keeps track of the game state. After I had built it and written all the code, I realized I could have just used media queries on the web version of the game... but anyway, it's way cooler to have a board game contraption.

[1]: https://punkx.org/overflow/esp32.html

Show HN: Mel – The missing unsubscribe button for physical mail

Since 2017, I've been on a mission to stop the deluge of paper mail that ends up in my trash. It's been a slow, manual process, but I've succeeded! I created Mel to help others rid themselves of physical junk mail.

Simply text a photo of junk mail and Mel contacts the sender to have you removed from their list – no registration or app required.

Show HN: Nav – A terminal navigator for interactive `ls` workflows

Hi everyone,

I built a tool for interactive navigation in the terminal that is intended to replace the all-too-familiar cycle of `ls` to view a directory, followed by `cd`, then `ls`, and repeat.

nav is a terminal filesystem explorer built for interactive `ls` workflows. The key features I wanted to enable are interactivity and search, without it feeling like I'm using anything other than `ls`. nav supports common `ls` options/flags, as well as tab completion, and might expand its support for less common options in the future. These options exist as both CLI flags and interactive toggles.

nav works as a standalone tool or in a bash/zsh pipe or subshell to, e.g., change directories, copy a file name to the clipboard, etc. For example, I use the simple functions from the README in my .zshrc for interactive `cd` and copy-to-clipboard workflows.

nav was inspired by the discussion of the excellent `walk` [0] tool and was written from the ground up to support its `ls`-centric interactive feature set. I hope you find it useful, and I'd love to hear any feedback or suggestions that come to mind!

[0] https://news.ycombinator.com/item?id=37220828

Show HN: Abuse inflight WiFi APIs to track your flight

Show HN: Carton – Run any ML model from any programming language

The goal of Carton is to let you use a single interface to run any machine learning model from any programming language.

It's currently difficult to integrate models that use different technologies (e.g. TensorRT, Ludwig, TorchScript, JAX, GGML, etc.) into your application, especially if you're not using Python. Even if you learn the details of integrating each of these frameworks, running multiple frameworks in one process can cause hard-to-debug crashes.

Ideally, the ML framework a model was developed in should just be an implementation detail. Carton lets you decouple your application from specific ML frameworks so you can focus on the problem you actually want to solve.

At a high level, the way Carton works is by running models in their own processes and using an IPC system to communicate back and forth with low overhead. Carton is primarily implemented in Rust, with bindings to other languages. There are lots more details in the architecture doc linked below.

Importantly, Carton uses your model's original underlying framework (e.g. PyTorch) under the hood to actually execute the model. This is meaningful because it makes Carton composable with other technologies. For example, it's easy to use custom ops, TensorRT, etc. without changes. This lets you keep up with cutting-edge advances, but decouples them from your application.

I've been working on Carton for almost a year now and I'm excited to open source it today!

Some useful links:

* Website, docs, quickstart - https://carton.run

* Explore existing models - https://carton.pub

* Repo - https://github.com/VivekPanyam/carton

* Architecture - https://github.com/VivekPanyam/carton/blob/main/ARCHITECTURE.md

Please let me know what you think!
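
To give a sense of the intended developer experience, here is a minimal sketch of what calling a Carton-packed model from Python could look like. The package and function names (cartonml, carton.load, model.infer) are assumptions for illustration only; see the quickstart at https://carton.run for the real API.

    # Hypothetical sketch of Carton's Python bindings; names are assumed,
    # not taken from the actual docs.
    import asyncio
    import numpy as np
    import cartonml as carton  # assumed package name

    async def main():
        # Loading is framework-agnostic: Carton runs the model's original
        # framework in a separate process and talks to it over IPC.
        model = await carton.load("https://carton.pub/some-org/some-model")

        # Inference takes and returns plain tensors, regardless of whether
        # the model underneath is TorchScript, GGML, TensorRT, etc.
        out = await model.infer({"input": np.zeros((1, 3, 224, 224), dtype=np.float32)})
        print(out)

    asyncio.run(main())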

Show HN: Generative Fill with AI and 3D

Hey all,

You've probably seen projects that add objects to an image from a style or text prompt, like InteriorAI (levelsio) and Adobe Firefly. The prevalent issue with these diffusion-based inpainting approaches is that they don't yet have great conditioning on lighting, perspective, and structure. You'll often get incorrect or generic shadows, warped-looking objects, and distorted backgrounds.

What is Fill 3D? Fill 3D is an exploration of doing generative fill in 3D to render ultra-realistic results that harmonize with the background image, using industry-standard path tracing, akin to compositing in Hollywood movies.

How does it work?

1. Deproject: First, deproject an image to a 3D shell using both geometric and photometric cues from the input image.

2. Place: Draw rectangles and describe what you want in them, akin to Photoshop's Generative Fill feature.

3. Render: Use good ol' path tracing to render ultra-realistic results.

Why Fill 3D?

+ The results are insanely realistic (see the video in the GitHub repo or on the website).

+ Fast enough: generations currently take 40-80 seconds. Diffusion takes ~10 seconds, so we're slower, but for the level of realism it's pretty good.

+ Potential applications: I'm thinking of virtual staging in real estate media. What do you think?

Check it out at https://fill3d.ai

+ There's API access! :D

+ Right now you need an image of an empty room. I will loosen this restriction over time.

Fill 3D is built on Function (https://fxn.ai). With Function, I can run the Python functions that do the steps above on powerful GPUs with only code (no Dockerfile, YAML, k8s, etc.), and invoke them from just about anywhere. I'm the founder of fxn.

Tell me what you think!!

PS: This is my first Show HN, so please be nice :)
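
As an aside on step 1, "deprojection" in the general sense is just the pinhole-camera math that lifts pixels into 3D once you have a depth estimate and camera intrinsics. The snippet below is a generic illustration of that idea, not Fill 3D's actual implementation; the depth map and the intrinsics matrix K are assumed inputs.

    # Generic pinhole deprojection: pixel (u, v) with depth d maps to the
    # 3D point d * K^-1 * [u, v, 1]. Illustrative only.
    import numpy as np

    def deproject(depth, K):
        """depth: (H, W) depth map in meters; K: 3x3 camera intrinsics."""
        H, W = depth.shape
        u, v = np.meshgrid(np.arange(W), np.arange(H))
        pixels = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
        rays = pixels @ np.linalg.inv(K).T      # back-project through the lens
        points = rays * depth.reshape(-1, 1)    # scale each ray by its depth
        return points.reshape(H, W, 3)          # a 3D "shell" of the scene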

Show HN: Use ChatGPT with Apple Shortcuts

This project was born from my passion for every Apple product and the entire ecosystem, along with the power of ChatGPT.

I was tired of copy-pasting and switching between apps on my Mac or iPhone, so I had this crazy idea to bring ChatGPT into every app that I use.

This was possible only with the Apple Shortcuts app. Very few people know about its power and potential, so I took this chance and built COPILOT.

https://meetcopilot.app

But I also loved the idea of AI agents using various tools so much that I leveraged OpenAI's function-calling feature to accomplish that.

Now, no matter what app I am using on my iPhone or Mac, the selected text or the current webpage from Safari is passed automatically to COPILOT. Then I just ask it whatever I need and watch until it reaches the goal autonomously.

There was also another problem with ChatGPT: the knowledge cutoff. So I also integrated Google search and web scraping into it. Now, whenever my request needs real-time information, like the latest version of macOS, it uses these tools to gather the data and then gives me the correct response.

Being an app built with Apple Shortcuts, all of its "code" (its actions) is actually open source. I'm selling a Premium version of it to earn some revenue for my time.

I would love to hear all your thoughts on it! Thank you so much!
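
For readers curious about the function-calling pattern mentioned above, the sketch below shows the general shape of wiring a search tool into an OpenAI chat completion in Python. COPILOT itself is built with Apple Shortcuts, not Python, and the google_search tool name here is hypothetical; this only illustrates how the model decides when to call a tool for real-time information.

    # Illustrative only: OpenAI function calling with a hypothetical search tool.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    tools = [{
        "type": "function",
        "function": {
            "name": "google_search",  # hypothetical tool name
            "description": "Search the web for up-to-date information",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }]

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "What is the latest version of macOS?"}],
        tools=tools,
    )

    # If the model asks for the tool, run the search yourself and send the
    # result back in a follow-up "tool" message so it can answer correctly.
    print(resp.choices[0].message.tool_calls)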

Show HN: XRain – Explore rainfall statistics around the world

Last year I launched a website that allows people to see rainfall statistics based on satellite data. Historical rainfall information usually comes from rain gauges, and while these are fantastic, there are many parts of the world that don't have many long-term gauges, or where that data is hard to access. Satellite data can be an invaluable source of information for those data-scarce areas.

The business model is to sell "extreme precipitation" data that can be used for flood modelling. Unfortunately, after a year I still haven't made a single sale. I've tried various ways of advertising, mainly by messaging people on LinkedIn who would actually have a use for it. I'm still proud of what I've built, even if it's a flop!

The tech stack is SolidJS with a Django API backend.

Fun feature: jump to a completely random part of the world by clicking the "Random" button.

I'd love feedback on anything, such as how to improve the UI/UX of the mobile view of the map page.
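
For context on what "extreme precipitation" data typically means in flood modelling: one common approach (not necessarily what XRain does) is to fit an extreme-value distribution to annual maximum rainfall and read off return levels. A toy sketch with made-up numbers:

    # Toy illustration: fit a Gumbel distribution to annual maximum daily
    # rainfall and estimate return levels. Data values are invented.
    import numpy as np
    from scipy import stats

    annual_max_mm = np.array([62.0, 71.5, 55.2, 90.1, 48.7, 103.4, 66.0, 58.9])

    loc, scale = stats.gumbel_r.fit(annual_max_mm)
    for years in (10, 50, 100):
        # The T-year return level is the (1 - 1/T) quantile of the fit.
        level = stats.gumbel_r.ppf(1 - 1 / years, loc=loc, scale=scale)
        print(f"{years}-year daily rainfall: {level:.1f} mm")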

Show HN: The Tomb of Ramesses I in the Valley of the Kings

Hey all, I'm coding up a new tour system for my 3D Egyptian work. After the last round of feedback, I focused on building more interactive content and FX into the guided tours, with sound and with the mythology in the wall art told as part of the tour.

I'd love feedback on the new version. It's built with vanilla Three.js and footage captured on my iPhone 12. For various FX, I coded many of my own shaders based on work by https://twitter.com/akella and others on ShaderToy, so I'm keen to test on more devices.

As the hacker, so the (ancient) painter.

Show HN: A JavaScript function that looks and behaves like a pipe operator
