The best Show HN stories from Hacker News from the past day
Latest posts:
Show HN: I automated half of my typing
I've been using this for about a year now. I parsed six months of my Slack messages, found the most common phrases I use, and generated keyboard shortcuts for them.
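The core idea (counting frequent phrases in a message export and turning them into abbreviations) can be sketched as below. This is a hypothetical illustration, not the author's code; the function names and the first-letters abbreviation scheme are assumptions.

```python
# Sketch: find the most common n-word phrases in a set of messages
# and map each to a short text-expansion abbreviation.
from collections import Counter

def top_phrases(messages, n=3, k=5):
    """Return the k most frequent n-word phrases across all messages."""
    counts = Counter()
    for msg in messages:
        words = msg.lower().split()
        for i in range(len(words) - n + 1):
            counts[" ".join(words[i:i + n])] += 1
    return counts.most_common(k)

def make_shortcuts(phrases):
    """Map each phrase to an abbreviation built from its initials."""
    return {"".join(w[0] for w in p.split()): p for p, _ in phrases}

if __name__ == "__main__":
    msgs = [
        "let me know if that works",
        "let me know when you are free",
        "let me know if that works for you",
    ]
    # e.g. {"lmk": "let me know", ...} for the sample messages above
    print(make_shortcuts(top_phrases(msgs)))
```

The resulting mapping could then be fed into whatever text-expansion tool the OS provides.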
Show HN: Why AI data should be self hosted
Show HN: Mu – A Micro App Platform
Hey all

Sharing a new piece of work I've been doing with a friend. Mu is a new micro web app platform that lets you build and share apps instantly, with storage, auth, and payments built in. Apps are single file, built in the browser, and rendered as an iframe. They're "micro" because they're quite literally tiny single-purpose utilities, like a Hacker News reader or an old-school guest book. At this point it's mostly something that scratches a personal itch: making app development super simple and lightweight, sort of like living GitHub gists, and trying to build a simpler, cleaner place to consume the web. Right now it's nothing more than a cool hack I'm sharing. Feedback obviously welcome.

Cheers
Asim
Show HN: Advanced Tab Manager for Firefox
Show HN: Langfuse – Open-source observability and analytics for LLM apps
Hi HN! Langfuse is OSS observability and analytics for LLM applications (repo: https://github.com/langfuse/langfuse, 2-min demo: https://langfuse.com/video, try it yourself: https://langfuse.com/demo).

Langfuse makes capturing and viewing LLM calls (execution traces) a breeze. On top of this data, you can analyze the quality, cost, and latency of LLM apps.

When GPT-4 dropped, we started building LLM apps – a lot of them! [1, 2] But they all suffered from the same issue: it's hard to assure quality in 100% of cases, or even to have a clear view of user behavior. Initially, we logged all prompts/completions to our production database to understand what works and what doesn't. We soon realized we needed more context, more data, and better analytics to sustainably improve our apps. So we started building a homegrown tool.

Our first task was to track and view what is going on in production: what user input is provided, how prompt templates or vector DB requests work, and which steps of an LLM chain fail. We built async SDKs and a slick frontend to render chains in a nested way. It's a good way to look at LLM logic 'natively'. Then we added some basic analytics to understand token usage and quality over time for the entire project or single users (pre-built dashboards).

Under the hood, we use the T3 stack (TypeScript, Next.js, Prisma, tRPC, Tailwind, NextAuth), which allows us to move fast and makes it easy to contribute to our repo. The SDKs are heavily influenced by the design of the PostHog SDKs [3] for stable implementations of async network requests. It was a surprisingly inconvenient experience to convert OpenAPI specs to boilerplate Python code, and we ended up using Fern [4] here. We're fans of Tailwind + shadcn/ui + tremor.so for speed and flexibility in building tables and dashboards fast.

Our SDKs run fully asynchronously and make network requests in the background. We did our best to reduce any impact on application performance to a minimum. We never block the main execution path.

We've made two engineering decisions we've felt uncertain about: using a Postgres database and Looker Studio for the analytics MVP. Supabase performs well at our scale and integrates seamlessly into our tech stack. We will need to move to an OLAP database soon and are debating whether we need to start batching ingestion and whether we can keep using Vercel. Any experience you could share would be helpful!

Integrating Looker Studio got us to our first analytics charts in half a day. As it is not open source and does not work with our UI/UX, we are looking to switch it out for an OSS solution to flexibly generate charts and dashboards. We've had a look at Lightdash and would be happy to hear your thoughts.

We're borrowing our OSS business model from PostHog/Supabase, who make it easy to self-host with features reserved for enterprise (no plans yet) and a paid version for a managed cloud service. Right now all of our code is available under a permissive license (MIT).

Next, we're going deep on analytics. For quality specifically, we will build out model-based evaluations and labeling to be able to cluster traces by scores and use cases.

Looking forward to hearing your thoughts and discussion – we'll be in the comments. Thanks!

[1] https://learn-from-ai.com/
[2] https://www.loom.com/share/5c044ca77be44ff7821967834dd70cba
[3] https://posthog.com/docs/libraries
[4] https://buildwithfern.com/
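The non-blocking SDK pattern described above (enqueue on the hot path, ship batches from a background worker) can be sketched roughly as follows. This is a generic illustration, not Langfuse's actual API; the class name, `send` callback, and batch size are assumptions.

```python
# Sketch: an async tracing client. log() only appends to an in-memory
# queue and returns immediately; a daemon thread drains the queue and
# sends events in batches, so the caller never blocks on network I/O.
import queue
import threading

class AsyncTracer:
    def __init__(self, flush_batch=10, send=print):
        self._q = queue.Queue()
        self._send = send            # stand-in for an HTTP request
        self._batch = flush_batch
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def log(self, event):
        """Called on the application's hot path: enqueue and return."""
        self._q.put(event)

    def _run(self):
        buf = []
        while True:
            item = self._q.get()
            if item is None:         # shutdown sentinel: flush and exit
                if buf:
                    self._send(buf)
                return
            buf.append(item)
            if len(buf) >= self._batch:
                self._send(buf)
                buf = []

    def shutdown(self):
        """Flush remaining events and stop the worker."""
        self._q.put(None)
        self._worker.join()
```

A real SDK would add retries, backpressure limits, and timed flushes on top of this; the batching here is also the shape the ingestion-side batching question above is about.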
Show HN: Customizable terminal UI for monitoring weather, app uptime, and more
Turning websites into animated videos
We'd love for you to try the tool on new apps. Feel free to share your website, and we'll make a new video for you.
Show HN: I built a website that lets you read classic books as email newsletters
Show HN: MoodMinder – Swift Anger Regulation for Better Emotional Well-Being
Hey Hacker News community! We're excited to showcase MoodMinder, a mental health app MVP that empowers individuals to swiftly regulate anger and enhance emotional well-being. MoodMinder was born out of a desire to provide quick anger-regulation solutions for busy individuals.

Unique features:

- Rapid mood identification: identify anger triggers and tension levels swiftly.
- Instant personalized meditations: receive tailored meditations for immediate anger control.
- Cognitive reappraisal: shift perspectives to defuse triggers in real time.
- Quick interactive games: engage in games designed for anger regulation in just minutes.

As an MVP, we're seeking insights from the Hacker News community to shape our app's development. Your feedback is pivotal.
Thank you for being part of our journey!
Show HN: MuscleWiki Advanced Bodymap – A More Granular Exercise Finder
We are currently moving our site over to a React front end and adding more features. I wanted to highlight one we are quite proud of: the advanced bodymap. Click the Advanced button and get specific exercises that work that muscle. We are hoping to have an even bigger breakdown in the future when we build the "recovery" section of the site.

Happy to answer any questions and take feature requests.