The best "Show HN" stories on Hacker News from the past day
Latest posts:
Show HN: Gitlab Meeting Simulator 2024
GitLab's meeting recordings on YouTube have tens of thousands of views from people pretending to work. Now you can appear to be in the meeting using your own webcam.
Show HN: Aldi Price Map
Hi HN, Inspired by the recent discussion on traderjoesprices.com, and sites such as mccheapest.com, here is a map of how much it costs to shop (this week's promo items) at Aldi.
Show HN: FoldMation – Interactive origami learning and creation
Hi, I've created an application where you can follow step-by-step origami fold instructions, and a Creator where you can make these interactive folds.

Compared to video instructions, you have the ability to quickly skip/rewind steps and replay a complicated step many times.

On the creation side, there have been one or two attempts at this before, but those solutions rely on mouse drags as the user interface, which greatly limited the kinds of folds possible. The foldMation Creator uses commands, keywords, and values to compose a domain-specific language for each step, and provides a (relatively speaking) easy-to-use interface to compose the steps.

For those interested in using the Creator, please go through the tutorial at the top of the create page.

Btw, the DSL for foldMation uses https://github.com/mationai/mation-spec. I created it since I couldn't find anything out there that is similar, allowing me to specify well-structured data with an English-like readable syntax.

Let me know what you think!
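Purely as a hypothetical illustration of the commands/keywords/values idea (this is not mation-spec's actual syntax; see the repo above for that), a parsed fold step might be modeled as structured data along these lines:

```typescript
// Hypothetical data model for a parsed fold step -- illustrative only,
// not the actual mation-spec format.
interface FoldStep {
  command: string;                         // e.g. "valleyFold" (hypothetical name)
  values: Record<string, string | number>; // keyword/value parameters
}

// A sequence a viewer could skip, rewind, or replay step by step.
const craneSteps: FoldStep[] = [
  { command: "valleyFold", values: { from: "topCorner", to: "bottomCorner" } },
  { command: "rotate", values: { degrees: 90 } },
];
```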
Show HN: You don't need to adopt new tools for LLM observability
If you've built any web-based app in the last 15 years, you probably used something like Datadog, New Relic, Sentry, etc. to monitor and trace your app, right?

Why should it be different when the app you're building happens to be using LLMs?

So today we're open-sourcing OpenLLMetry-JS. It's an open protocol and SDK, based on OpenTelemetry, that provides traces and metrics for LLM JS/TS applications and can be connected to any of the 15+ tools that already support OpenTelemetry. Here's the repo: https://github.com/traceloop/openllmetry-js

A few months ago we launched the Python flavor here (https://news.ycombinator.com/item?id=37843907) and we've now built a compatible one for Node.js.

Would love to hear your thoughts and opinions!

Check it out:

Docs: https://www.traceloop.com/docs/openllmetry/getting-started-ts

GitHub:
https://github.com/traceloop/openllmetry-js
https://github.com/traceloop/openllmetry
Show HN: Linen.team – A lightweight, thread-first Slack alternative
Hi HN! I'm Kam, the founder and one of the authors of Linen. Today, we are launching Linen.team (https://linen.team/), a lightweight threaded messaging app for your team.

Modern workplace messaging apps (like Slack) are based on IRC, which is great for small groups but breaks down quickly as it scales: you either get overwhelmed by notifications or you have to turn them all off. Most chat apps have threads tacked on but aren't built from the ground up with this design in mind. We wanted to create a thread-first experience where you can organize and prioritize conversations so that you are not reliant on notifications to make sure you don't miss anything.

In apps like Slack, you have to check activities, channels, threads, and replies just to make sure you aren't missing anything important. We designed every message in Linen to belong to a thread, which makes it easy to centralize everything in a single location. We let you select which channels you subscribe to from your inbox, so your inbox only has the important channels and you can keep track of conversations without relying on notifications.

We also wanted a better way to separate urgent vs. non-urgent communication. In Linen, we have introduced the concept of a !mention, designed for urgent/time-sensitive messages. A !mention will send a push notification, whereas an @mention will show up in the person's inbox. This lets us encourage more async conversations and reduce the number of push notifications. We also designed the mention system closely with the inbox, so that even if you aren't subscribed to a channel, mentions will still appear in your inbox. This is great for joining partner teams where you don't need to view every conversation but do need to respond when you are mentioned. (A minimal sketch of this routing idea follows at the end of this post.)

We believe that most messaging apps are secretly to-do lists in disguise; you have to read, respond, or do some task when you receive a thread. We wanted to give you the ability to manage threads individually. We let you mark each thread as done, which hides it from your inbox and is useful for keeping track of tasks. You can also set reminders and mute threads with one click/key. With these features, plus the inbox, it's easy to reach inbox zero and be confident you haven't missed anything.

Linen is designed for power users. We love keyboard shortcuts and want an experience that is keyboard-first. For many, the messaging app is the most-used app of all, and we believe you should be able to use Linen for an entire day without touching the mouse. We've added modern features like CMD+K for navigation. We've also designed Linen to be fast and lightweight: our gzipped bundle size is 400KB, so it's fast on first load, and we've introduced multiple layers of caching to keep things fast on subsequent loads.

We've been working hard on this app for the past 6 months, so there are still gaps in the platform. But we're also very excited about the direction we can take. Our focus is on what a modern messaging platform built in 2024 should look like and what lessons we can take from the previous decades of IRC and messaging apps.

If our message resonates with you, we would love for you to give us a try at https://www.linen.team/signup?callbackUrl=/linen, where you can join our public community and come say hi!
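As a hypothetical sketch of the !mention vs. @mention behavior described above (not Linen's actual code), the routing could look something like this:

```typescript
// Hypothetical model of Linen's mention routing -- illustrative only.
type Mention = { kind: "!" | "@"; userId: string };

interface Delivery {
  pushNotification: boolean; // interrupt the user now
  inbox: boolean;            // surface in their inbox for async follow-up
}

function routeMention(m: Mention): Delivery {
  // A !mention is urgent: push immediately (it lands in the inbox too).
  // An @mention is async: inbox only, even if the user isn't subscribed
  // to the channel the thread lives in.
  return m.kind === "!"
    ? { pushNotification: true, inbox: true }
    : { pushNotification: false, inbox: true };
}
```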
Show HN: Play the game I'm developing directly on its website
I've been working on Athena Crisis for about two years, and full time for the past 9 months. The game is entirely built from scratch using React and CSS without a game engine. It runs anywhere, including the Steam Deck. You can even use a gamepad on the landing page to play!

Previously the landing page had a video of the game, but my goal was always to just put the actual game on the website. I merged the landing page into the game's monorepo, added the game's React components, and boom – the video was replaced with a playable version of Athena Crisis.

Of course, the real game has tons more features, but the landing page now always runs the exact same code as the actual game – including assets, the AI, and the UI/UX – and updates are pushed within 5 minutes as the actual game is updated live.

I frequently talk about the tech behind this game (see this React Summit talk about "How Not to Build a Video Game": https://www.youtube.com/watch?v=m8SmXOTM8Ec) and I'm planning on open sourcing as much as possible in the future.
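The monorepo trick described above is easy to picture as a sketch. All names here are hypothetical, not the actual Athena Crisis codebase; the point is that the landing page imports the same React components the shipped game uses:

```tsx
// Hypothetical monorepo layout -- package and component names are assumptions:
//   packages/game/      -> the real game's exported React components
//   apps/landing-page/  -> marketing site importing those same components

// apps/landing-page/src/Hero.tsx
import { GameDemo } from "@athena/game"; // hypothetical workspace package

export function Hero() {
  // The landing page renders the same component the shipped game uses,
  // so every game deploy also updates the playable demo on the site.
  return <GameDemo mode="demo" />;
}
```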
Show HN: Reor – An AI note-taking app that runs models locally
Reor is an open-source AI note-taking app that runs models locally.

The four main things to know are:

1. Notes are connected automatically with vector search. You can do semantic search, and related notes are automatically connected.
2. You can do RAG Q&A on your notes using the local LLM of your choice.
3. The embedding model, LLM, vector DB, and files are all run or stored locally.
4. Point it at a directory of markdown files (like an Obsidian vault) and it works seamlessly alongside Obsidian.

Under the hood, Reor uses Llama.cpp (via the node-llama-cpp integration), Transformers.js, and LanceDB to power the local AI features.

Reor was built right from the start to support local models. The future of knowledge management involves using lots of AI to organize pieces of knowledge - but crucially, that AI should run as much as possible privately and locally.

It's available for Mac, Windows, and Linux on the project GitHub: https://github.com/reorproject/reor
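To give a flavor of the local pipeline, here is a rough sketch of semantic search over notes with Transformers.js and LanceDB. This is not Reor's actual code: the model choice and table schema are assumptions, and the LanceDB calls follow an early release of the `vectordb` npm package, so details may have shifted.

```typescript
import { pipeline } from "@xenova/transformers";
import * as lancedb from "vectordb";

// Embed text locally; the model name is an assumption, not Reor's choice.
const embed = await pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2");

async function toVector(text: string): Promise<number[]> {
  const output = await embed(text, { pooling: "mean", normalize: true });
  return Array.from(output.data as Float32Array);
}

// Store note embeddings in a local LanceDB table (schema is hypothetical).
const db = await lancedb.connect("./notes-db");
const notes = [
  { path: "ideas.md", text: "Local-first apps keep data on-device." },
  { path: "todo.md", text: "Try running an LLM with llama.cpp." },
];
const rows = await Promise.all(
  notes.map(async (n) => ({ ...n, vector: await toVector(n.text) })),
);
const table = await db.createTable("notes", rows);

// Semantic search: nearest notes to a query, all computed locally.
const hits = await table.search(await toVector("on-device AI")).limit(2).execute();
console.log(hits.map((h: any) => h.path));
```

For RAG Q&A, the retrieved note text would then be stuffed into the prompt of a local LLM (in Reor's case, via node-llama-cpp).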
Show HN: Swift Mail, a native macOS app for JMAP mail
Hello HN! I'm excited to share Swift Mail, a native macOS email client purpose-built for the JMAP mail standard.

Primarily constructed with SwiftUI with occasional AppKit elements, Swift Mail combines the speed and efficiency of a modern mail standard with desktop-centric features such as system notifications, keyboard shortcuts, Quick Look, multiple windows, state restoration, dark mode, and more.

Swift Mail distinguishes itself from other email clients with its steadfast commitment to the JMAP standard over the traditional IMAP implementation, facilitating seamless alignment with modern mail features. It supports various innovative Fastmail features, such as multiple sending identities, the ability to send or reply on-the-fly from wildcard (*) aliases, and the ability to swiftly transition between (true) label and folder organization schemes.

Swift Mail prioritizes user privacy and does not collect any user data or function through intermediary servers. Instead, it connects directly to the JMAP server with the user's provided account credentials, processing and storing all data locally on the user's device.

Currently, Swift Mail is available directly via the Mac App Store, with support extending back to Monterey. I'm also running a developer build on visionOS (if you have hardware and are interested in testing a beta release, please reach out to beta at swiftmail dot io).

A sincere thank you to everyone who has contributed their valuable insights or participated in beta testing via TestFlight thus far.

Looking forward to your feedback!

- Karl
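For context on why JMAP (RFC 8620/8621) is pleasant to build a client against: unlike IMAP's stateful text protocol, a single JSON POST can batch method calls. A minimal sketch of querying the 10 newest emails in a mailbox follows; the endpoint URL, token, account ID, and mailbox ID are placeholders, but the method shape matches the spec.

```typescript
// Minimal JMAP request (RFC 8620/8621): list the 10 newest emails in a
// mailbox. Endpoint, token, accountId, and mailbox ID are placeholders.
const response = await fetch("https://jmap.example.com/api", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer <token>", // placeholder credential
  },
  body: JSON.stringify({
    using: ["urn:ietf:params:jmap:core", "urn:ietf:params:jmap:mail"],
    methodCalls: [
      ["Email/query", {
        accountId: "u123",
        filter: { inMailbox: "mailbox-inbox" },
        sort: [{ property: "receivedAt", isAscending: false }],
        limit: 10,
      }, "q1"],
    ],
  }),
});
const { methodResponses } = await response.json();
console.log(methodResponses); // e.g. [["Email/query", { ids: [...] }, "q1"]]
```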
Show HN: Kubetail – Web-based real-time log viewer for Kubernetes
Hi Everyone!<p>Kubetail is a new project I've been working on. It's a private, real-time log viewer for Kubernetes clusters. You deploy it inside your cluster and access it via a web browser, like the Kubernetes Dashboard.<p>Using kubetail, you can view logs in real-time from multiple Workload containers simultaneously. For example, you can view all the logs from the Pod containers running in a Deployment and the UI will update automatically as the pods come into and out of existence. Kubetail uses your in-cluster Kubernetes API so your logs are always in your possession and it's private by default.<p>Currently you can filter logs based on node properties such as availability zone, CPU architecture or node ID and we have plans for a lot more features coming up.<p>Here's a live demo: <a href="https://www.kubetail.com/demo" rel="nofollow">https://www.kubetail.com/demo</a><p>Check it out and let me know what you think!<p>Andres
Show HN: Statusduck – Website monitoring tool where the data is public
Hello HN! We're excited to share Statusduck, an uptime monitoring tool where the data is publicly viewable.

There are countless uptime tools out there, but we thought we'd put a fun spin on the concept by making one where people can immediately set up a monitor without signing up. Just type in a website and our service will start pinging it every minute!
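The core mechanic is simple enough to sketch in a few lines. This is a toy illustration of the concept, not Statusduck's implementation:

```typescript
// Toy uptime monitor: ping a site once a minute and record the outcome.
interface Check {
  at: Date;
  ok: boolean;
  status?: number; // HTTP status, absent when the request failed outright
  ms: number;      // response time
}

const history: Check[] = [];

async function ping(url: string): Promise<void> {
  const start = Date.now();
  try {
    const res = await fetch(url, { method: "HEAD" });
    history.push({ at: new Date(), ok: res.ok, status: res.status, ms: Date.now() - start });
  } catch {
    history.push({ at: new Date(), ok: false, ms: Date.now() - start });
  }
}

// Check every 60 seconds, matching the one-minute interval described above.
setInterval(() => ping("https://example.com"), 60_000);
```

Making `history` publicly readable for every monitored site is the twist Statusduck adds on top.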
Show HN: Faster LLM evaluation with Bayesian optimization
Recently I've been working on making LLM evaluations fast by using Bayesian optimization to select a sensible subset.
Bayesian optimization is a good fit because it balances exploration and exploitation when querying an expensive black box (here, the LLM).
I would love to hear your thoughts and suggestions on this!
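For readers unfamiliar with the exploration/exploitation framing: a surrogate model (commonly a Gaussian process) assigns each candidate point a predicted mean and an uncertainty, and an acquisition function trades the two off. The post doesn't say which acquisition function is used, so take this as one representative choice, the upper confidence bound:

```latex
% Upper confidence bound (UCB) acquisition: evaluate next wherever the
% surrogate's optimism (mean plus scaled uncertainty) is highest.
\alpha_{\mathrm{UCB}}(x) = \mu(x) + \kappa \, \sigma(x),
\qquad
x_{\mathrm{next}} = \operatorname*{arg\,max}_{x} \alpha_{\mathrm{UCB}}(x)
```

Here μ(x) and σ(x) are the surrogate's posterior mean and standard deviation at candidate x, and κ controls how strongly unexplored (high-σ) points are favored over points already predicted to score well.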