The best Show HN stories on Hacker News from the past week

Latest posts:

Show HN: Heynote – A dedicated scratchpad for developers

Hey!

I made Heynote entirely for my own use case. For many years, I always had an Emacs instance running with the scratch buffer open, even long after I had abandoned Emacs as my programming editor in favor of more recent IDEs.

The simplicity of having just one big scratch buffer appeals to me, but I still want to separate the different things I jot down somehow (without using tabs or similar). Previously, my solution was to insert a bunch of blank lines between the notes, but hitting C-A would still select the entire buffer. That's why I came up with the concept of "blocks", which turned out really well for my use cases.

I decided to release Heynote, thinking it might be useful to others.
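
The "blocks" idea can be sketched as splitting one buffer on delimiter lines, so that operations like select-all scope to the block under the cursor rather than the whole buffer. A minimal illustration in Python; the `%%` delimiter and the helper names are made up for this sketch and are not Heynote's actual format:

```python
DELIM = "%%"  # hypothetical block separator, not Heynote's real one

def split_blocks(buffer: str) -> list[str]:
    """Split a scratch buffer into blocks on delimiter lines."""
    blocks, current = [], []
    for line in buffer.splitlines():
        if line.strip() == DELIM:
            blocks.append("\n".join(current))
            current = []
        else:
            current.append(line)
    blocks.append("\n".join(current))
    return blocks

def block_at(buffer: str, line_no: int) -> str:
    """Return the block containing the given (0-based) line number,
    so 'select all' can select just that block."""
    count = 0
    for i, line in enumerate(buffer.splitlines()):
        if line.strip() == DELIM and i < line_no:
            count += 1
    return split_blocks(buffer)[count]
```

With this scoping, a select-all on line 3 of `"a\nb\n%%\nc\nd"` would grab only `"c\nd"`, not the whole buffer.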

Show HN: Talk to any ArXiv paper just by changing the URL

Hello HN, Talk2Arxiv is a small open-source RAG application I've been building for a few weeks. To use it, just prepend 'talk2' to any arxiv.org link to load the paper into a responsive RAG chat application (e.g. www.arxiv.org/abs/1706.03762 -> www.talk2arxiv.org/abs/1706.03762).

All implementation details are on GitHub. Currently, because I've opted to extract text from the PDF of the paper rather than reading the LaTeX source (since I wanted to build a more generic PDF RAG in the process), it struggles with symbolic text/mathematics and sometimes fails to retrieve the correct context. I appreciate any feedback, and I hope people find it useful!

Currently, the backend PDF processing server is single-threaded, so if embedding takes a while, please be patient!
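
The URL trick amounts to a one-line rewrite of the hostname. For illustration (the function name is mine, not the project's):

```python
def talk2_url(arxiv_url: str) -> str:
    """Rewrite an arxiv.org URL to its talk2arxiv.org equivalent by
    prepending 'talk2' to the host, as described in the post."""
    return arxiv_url.replace("arxiv.org", "talk2arxiv.org", 1)
```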

Show HN: I'm open-sourcing my game engine

Modd.io is a collaborative game editor that runs in the browser. It's kind of like Figma for game dev.

We made this engine low-code and multiplayer-first, so developers can quickly prototype casual multiplayer games.

I hope some of you will find this useful. I'd love to hear feedback, too. Thank you.

Engine Demo: https://www.modd.io

Show HN: I built an open source AI video search engine to learn more about AI

Hi there! When Supabase announced their recent hackathon, I thought it was a good time to build something to learn more about the many new AI models and techniques out there, from the different ways of embedding documents to RAG.

With the rise of short-form content on TikTok and YouTube, a lot more knowledge lives in videos than ever before. Finding specific answers within millions of videos is difficult for any one person. If Google indexes text on websites, making answers easy to find based on the context of your question, why is there no Google that indexes video content so users can find answers within it?

So I built this to show that it's very much possible with the technology and infrastructure that is readily available.

I've indexed thousands of videos from YouTube and will be adding more. Some of the things coming soon:

- Index TikTok videos
- Use Whisper to transcribe audio in videos that don't have captions
- Auto-scrape both YouTube and TikTok every day to add new content

The tech stack:

- Supabase (PostgreSQL, pgvector, Auth)
- Hasura (GraphQL layer, permissions)
- Fly (hosting of Hasura)
- JigsawStack (Summary AI, Chat AI)
- Vercel (Next.js hosting, serverless functions)

The code is open source here: https://github.com/yoeven/ai-video-search-engine

Would love some feedback and thoughts on what you would like to see!
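
At its core, retrieval like this scores embeddings of transcript chunks against a query embedding by similarity. A toy in-memory sketch of that step (the project itself uses pgvector inside Postgres; this just illustrates the idea, and the index shape is my own invention):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query_vec, index, top_k=3):
    """index: list of (video_id, timestamp_sec, embedding) tuples.
    Returns the top_k most similar chunks, best first."""
    scored = [(cosine(query_vec, emb), vid, ts) for vid, ts, emb in index]
    return sorted(scored, reverse=True)[:top_k]
```

A vector database does the same ranking, but with an approximate index so it stays fast at millions of chunks.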

Show HN: Microagents: Agents capable of self-editing their prompts / Python code

Show HN: NowDo – macOS todo app for procrastinators

Show HN: Get any piece of Google Earth as a single normalized glTF 3D model

Google released an API in May to fetch 3D Tiles of anywhere on Earth. Using this in standard 3D engines like Blender is tricky because (1) the tiles are in a geographic coordinate system and (2) you get a lot of little tiles at varying quality levels.

I wanted to simplify this so all you need to do is get an API key, select a map region and a zoom level, and get one combined glTF file that you can throw into any engine. It's especially useful if you're just prototyping and want to see how this data looks in your engine before investing in figuring out all the nuances of the API and coordinate system.

(Note that the API prohibits offline use, as in you can't distribute a processed glTF file like this. But you can do this preprocessing in memory whenever you're fetching tiles.)
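
One piece of the coordinate-system wrangling the post alludes to: global tile coordinates are Earth-centred rather than local to your scene, so a tool like this has to relate latitude/longitude to Cartesian metres before recentring. The standard WGS84 geodetic-to-ECEF conversion, shown for illustration (this is the textbook formula, not code from the project):

```python
import math

# WGS84 ellipsoid constants
A = 6378137.0              # semi-major axis (metres)
F = 1 / 298.257223563      # flattening
E2 = F * (2 - F)           # first eccentricity squared

def geodetic_to_ecef(lat_deg: float, lon_deg: float, h: float = 0.0):
    """Convert WGS84 latitude/longitude/height to Earth-centred,
    Earth-fixed (ECEF) x, y, z in metres."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h) * math.sin(lat)
    return x, y, z
```

Subtracting the ECEF position of the selected region's centre from every vertex then yields scene-local coordinates an engine can handle.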

Show HN: My Go SQLite driver did poorly on a benchmark, so I fixed it

https://github.com/ncruces/go-sqlite3 was doing poorly on a benchmark that was posted to Hacker News yesterday [1].

With the help of some pprof, I was able to trace it to a serious performance regression introduced two weeks ago and come up with a fix (happy to field questions if you're interested in the nitty-gritty).

It's not the fastest driver around, but it's no longer the slowest: comfortably middle of the pack. It's based on a WASM build of SQLite and, thanks to https://wazero.io, doesn't need CGO.

[1]: https://news.ycombinator.com/item?id=38626698

Show HN: Airdraw

Hey everyone! I wanted to show my passion project: I made it a few years back and recently turned it into a web app.

Airdraw is an app that takes your hand gestures and converts them into real-time drawing capabilities.

There's a ton of stuff I want to build with this, but most importantly I just wanted to put it out into the world! Hope you all enjoy :)

Tested on macOS with Chrome, Safari, and Firefox.

I included a link to the original GitHub project here - https://github.com/arefmalek/airdraw - and a link to the blog post where I explain how I made it - https://arefmalek.github.io/blog/Airdraw/

If anyone has issues with loading, try again with this link - https://web-draw-e58vy7q9m-arefmalek.vercel.app/. AFAIK, the iOS livestream doesn't allow a canvas overlay, so you'd be able to draw but not see anything until you exit the livestream. Hope someone has a solution for that ¯\_(ツ)_/¯

Show HN: I scraped 25M Shopify products to build a search engine

Hi HN! I built Agora as a side project leading up to the holiday season. I wanted an easier way to find Christmas gifts without going store by store.

My wife asked me for a pair of red shoes for Christmas. I quickly typed it into Google and found a combination of ads from large retailers and links to a 1948 movie called 'Red Shoes'. I decided to build Agora to solve my own problem (and stay happily married). The product is a search engine that automatically crawls thousands of Shopify stores and makes them easily accessible through a search interface. There are a few additional features to enhance the buying experience, including saved products, filters, reviews, and popular products.

I've started exclusively with Shopify stores and plan to expand the crawler to other e-commerce platforms like BigCommerce, WooCommerce, Wix, etc. The technical challenge has been keeping search speed and performance strong as the data set grows. There are about 25 million products on Agora right now. I'll ramp this up carefully to make sure we don't compromise the search speed and user experience.

I'd love any feedback!

I benchmarked six Go SQLite drivers

Show HN: Advent of Distributed Systems

Hey! I built a playground called Advent of Distributed Systems (https://aods.cryingpotato.com/) where you can work through the Fly.io distributed systems challenges (https://fly.io/dist-sys/1/) directly in your browser. Running challenges like this in the browser has often been the best way for me to muster the activation energy to start them, since it bypasses all the annoying dev-environment setup that has to happen before working on them.

The coding environment was built with another project I'm working on called Cannon (https://cannon.cryingpotato.com/), which aims to let you embed code blocks of any language in your browser. Right now the Go environment runs on a Modal backend using their sandbox, but I'm hoping to use the excellent work done on Hackpad (https://github.com/hack-pad/hackpad/tree/main) to run the whole thing in your browser soon, with no network calls necessary.

Let me know what you think - week 3 is coming out soon!
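
For a sense of what the challenges involve: the first Fly.io challenge is an "echo" node speaking the Maelstrom protocol, where each node reads JSON messages and replies with an `echo_ok`. A minimal sketch of the reply construction, assuming the standard Maelstrom message shape (this is not code from the playground, and the real node also has to handle `init` and a stdin/stdout loop):

```python
def handle_echo(msg: dict, next_msg_id: int) -> dict:
    """Build the echo_ok reply for a Maelstrom 'echo' message.
    msg has the shape {"src": ..., "dest": ..., "body": {...}}."""
    body = msg["body"]
    return {
        "src": msg["dest"],   # reply comes from this node
        "dest": msg["src"],   # back to the sender
        "body": {
            "type": "echo_ok",
            "msg_id": next_msg_id,
            "in_reply_to": body["msg_id"],
            "echo": body["echo"],  # echo the payload unchanged
        },
    }
```

In the real challenge, a loop reads one JSON message per line from stdin, calls a handler like this, and writes the serialized reply to stdout.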

Show HN: Visualize rotating objects from the 4th, 5th, nth dimensions

For as long as I can remember, I've been curious about hyperdimensional spaces. Picturing higher dimensions: such an impossible yet exciting idea... Years ago I came across a small GIF of a tesseract, and it left me wondering what cubes from even higher dimensions would look like. Years passed, I became a software developer, and I decided to tackle the problem myself; ncube was the result.

ncube allows you to visualize rotating hypercubes of arbitrary dimensions. It works by rotating the hyperdimensional vertices and applying a chain of perspective projections to them until the 3rd dimension is reached. Everything is generated in real time just from the dimension number.

The application is fully free and open source: https://github.com/ndavd/ncube. There you'll find some demos, a more detailed explanation, and instructions for trying it out yourself. Binaries for Windows, Mac and Linux are available: https://github.com/ndavd/ncube/releases/latest. There's also a web version that runs fully in the browser: https://ncube.ndavd.com

If you like the project, I'd appreciate a star on GitHub ♥ If you have any issue or feature request, please submit it at https://github.com/ndavd/ncube/issues

-- EDIT DEC 13 2023: IMPORTANT NOTICE --

https://ncube.ndavd.com might be down for you. I deeply appreciate all the attention and feedback that ncube has received this last day. Alas, it has become too big for the free hosting I'm currently using and has exceeded my available monthly bandwidth. I will fix it soon. In the meantime, I recommend using the native binaries from https://github.com/ndavd/ncube

UPDATE: https://ncube.ndavd.com is back online, and with a performance boost. It should load faster now.

-- EDIT DEC 17 2023 --

If ncube is leaving your system unresponsive, it's likely due to a recent Chromium issue with hardware acceleration affecting at least all Chromium-based browsers on Linux. If you're on Brave, downgrading to 1.60 fixes it.

- 0xndavd
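
The projection chain described above can be sketched in a few lines: each step drops one dimension by dividing the remaining coordinates by the distance to a camera placed on the axis being dropped, repeated until only three dimensions remain. An illustrative Python version (ncube itself is not written in Python, and the camera distance here is an arbitrary choice):

```python
from itertools import product

def hypercube_vertices(n: int) -> list[list[float]]:
    """All 2**n vertices of the n-cube with side 2, centred at the origin."""
    return [list(v) for v in product((-1.0, 1.0), repeat=n)]

def project(point: list[float], camera_dist: float = 3.0) -> list[float]:
    """Collapse an n-dimensional point to 3D via a chain of perspective
    projections, one dimension at a time."""
    p = list(point)
    while len(p) > 3:
        w = 1.0 / (camera_dist - p[-1])  # perspective divide on the last axis
        p = [c * w for c in p[:-1]]
    return p
```

Projecting all 16 vertices of a tesseract this way (after rotating them in 4D) gives the familiar cube-within-a-cube wireframe.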

Show HN: A pure C89 implementation of Go channels, with blocking selects

Show HN: Open-source macOS AI copilot using vision and voice

Heeey! I built a macOS copilot that has been useful to me, so I open-sourced it in case others would find it useful too.

It's pretty simple:

- Use a keyboard shortcut to take a screenshot of your active macOS window and start recording the microphone.
- Speak your question, then press the keyboard shortcut again to send your question + screenshot off to OpenAI Vision.
- The Vision response is presented in context, overlaid on the active window, and spoken to you as audio.
- The app keeps running in the background, only taking a screenshot/listening when activated by the keyboard shortcut.

It's built with Node.js/Electron and uses the OpenAI Whisper, Vision and TTS APIs under the hood (BYO API key).

There's a simple demo and a longer walkthrough in the GitHub readme https://github.com/elfvingralf/macOSpilot-ai-assistant, and I also posted a different demo on Twitter: https://twitter.com/ralfelfving/status/1732044723630805212

Show HN: I built an OSS alternative to Azure OpenAI services

Hey HN, I'm proud to show you that I've built an open source alternative to Azure OpenAI services.

Azure OpenAI services was born out of companies needing enhanced security and access control when using different GPT models. I wanted to build an OSS version of Azure OpenAI services that people could self-host in their own infrastructure.

"How can I track LLM spend per API key?"

"Can I create a development OpenAI API key with limited access for Bob?"

"Can I see my LLM spend broken down by model and endpoint?"

"Can I create 100 OpenAI API keys that my students could use in a classroom setting?"

These are questions that BricksLLM helps you answer.

BricksLLM is an API gateway that lets you create API keys with rate limits, cost controls and TTLs, which can be used to access all OpenAI and Anthropic endpoints, with out-of-the-box analytics.

When I first started building with the OpenAI APIs, I was constantly worried about API keys being compromised, since vanilla OpenAI API keys grant unlimited access to all of their models. There are stories of people losing thousands of dollars, and a black market for stolen OpenAI API keys exists.

This is why I started building a proxy for ourselves that allows the creation of API keys with rate limits and cost controls. I built BricksLLM in Go since that was the language I used to build performant ad exchanges that scaled to thousands of requests per second at my previous job. A lot of developer tools in LLM ops are built with Python, which I believe might be suboptimal in terms of performance and compute-resource efficiency.

One of the challenges in building this platform is getting accurate token counts for different OpenAI and Anthropic models. LLM providers are not exactly transparent about how they count prompt and completion tokens. In addition to user input, OpenAI and Anthropic pad prompt inputs with additional instructions or phrases that contribute to the final token counts. For example, Anthropic's actual completion token consumption is consistently 4 more than the token count of the completion output.

The latency of the gateway hovers around 50ms. Half of that comes from the tokenizer. If I start utilizing goroutines, I might be able to lower it to 30ms.

BricksLLM is not an observability platform, but we do provide an integration with Datadog so you can get more insight into what is going on inside the proxy. Compared to other tools in the LLMOps space, I believe BricksLLM has the most comprehensive feature set for access control.

Let me know what you guys think.

Show HN: I Remade the Fake Google Gemini Demo, Except Using GPT-4 and It's Real
