The best Show HN stories from Hacker News from the past day
Latest posts:
Show HN: Anchor – developer-friendly private CAs for internal TLS
Hi HN! I'm Ben, co-founder of Anchor (https://anchor.dev/). Anchor is a hosted service for ACME-powered internal X.509 CAs. We recently launched our features & tooling for local development. The goal is to make it easy and toil-free to develop locally with HTTPS, and also to provide dev/prod parity for TLS/HTTPS encryption.

You can add Anchor to your development workflow in minutes. Here's how:

- https://blog.anchor.dev/getting-started-with-anchor-for-local-development-6dd2cd605c08

- https://blog.anchor.dev/service-to-service-tls-in-development-d0df479d67ce

We started Anchor because private CAs were a constant source of frustration throughout our careers. Avoiding them makes it all the more painful when you're finally forced to use one. The release of ACME and Let's Encrypt was a big step forward in certificate provisioning, but the improvements have been almost entirely in the WebPKI and public CA space. Internal TLS is still as unpleasant & painful to use as it has been for the past 20 years. So we've built Anchor to be a developer-friendly way to set up internal TLS that fully leverages the benefits of ACME:

- no encryption experience or X.509 knowledge required

- automatically generated system and language packages to manage client trust stores

- ACME (RFC 8555) compliant API, broad language/tooling support for cert provisioning

- fully hosted, no services or infra requirements

- works the same in all deployment environments, including development

If you're interested in more specific details and strategy, our blog posts cover all this and more: https://blog.anchor.dev/

We are asking for feedback on our features for local development, and would like to hear your thoughts & questions. Many thanks!
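For readers wondering what "managing client trust stores" involves when generated packages aren't available for their stack: the sketch below is a generic Java example, not Anchor's tooling. It assumes the internal CA's root certificate has been exported to a hypothetical internal-ca.pem file and calls a hypothetical internal HTTPS endpoint whose certificate chains to that CA.

import java.io.FileInputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.security.KeyStore;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class InternalTlsClient {
    public static void main(String[] args) throws Exception {
        // Load the internal CA's root certificate (exported as PEM).
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        X509Certificate caCert;
        try (FileInputStream in = new FileInputStream("internal-ca.pem")) {
            caCert = (X509Certificate) cf.generateCertificate(in);
        }

        // Build an in-memory trust store containing only that CA.
        KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
        trustStore.load(null, null);
        trustStore.setCertificateEntry("internal-ca", caCert);

        // Create an SSLContext that trusts certificates issued by the internal CA.
        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);
        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(null, tmf.getTrustManagers(), null);

        // Call an internal HTTPS service whose certificate chains to the internal CA.
        HttpClient client = HttpClient.newBuilder().sslContext(sslContext).build();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.internal.example.test/health"))
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}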
Show HN: Halloween game to show off my new Terminal
Hi hi,

I made a little Halloween-themed game this week to show off my new terminal. You can play it in the browser at https://joel.tools/halloween/

Two years ago, I started building an experiment tracking tool for myself as I have been venturing into AI research, and it has morphed into a terminal with rich content support and a bunch of fancy features: https://github.com/JoelEinbinder/snail/

I can't recommend people start using snail for important stuff today, but I think the game is fun and it shows off a lot of the capabilities of the terminal.

Joel Einbinder
Show HN: Streamdal – an open-source tail -f for your data
Hey there! This is Dan and Ustin (@uzarubin), and we want to share something cool we've been working on for the past year - an open-source `tail -f` for your data, with a UI. We call it "Streamdal", which is a word salad for streaming systems (because we love them) and DAL, or data access layer (because we're nerds).

Here's the repo: https://github.com/streamdal/streamdal

Here's the site: https://streamdal.com

And here's a live demo: https://demo.streamdal.com (the GitHub repo has an explanation of the demo)

— — —

THE PROBLEM

We built this because current observability tooling is not able to provide real-time insight into the actual data that your software is reading or writing. That means it takes longer to identify issues and longer to resolve them. That's time, money, and customer satisfaction at stake.

Want to build something in-house? Prepare to deploy a team, spend months of development time, and tons of money bringing it to production. Then be ready to have engineers around to babysit your new monitoring tool instead of working on your product.

— — —

THE BASIC FLOW

So, wtf is a "tail -f for your data"? What we mean is this:

1. We give you an SDK for your language, a server, and a UI.

2. You instrument your code with `StreamdalSDK.Process(yourData)` anytime you read or write data in your app.

3. You deploy your app/service.

4. Go to the provided UI (or run the CLI app) and peek into what your app is reading or writing, like with `tail -f`.

And that's basically it. There's a bunch more functionality in the project, but we find this to be the most immediately useful part. Every developer we've shown this to has said "I wish I had this at my gig at $company" - and we feel exactly the same. We are devs, and this is what we've always wanted, hundreds of times - a way to just quickly look at the data our software is producing in real time, without having to jump through any hoops.

If you want to learn more about the "why" and the origin of this project, you can read about it here: https://streamdal.com/manifesto

— — —

HOW DOES IT WORK?

The SDK establishes a long-running session with the server (using gRPC) and "listens" for commands that are forwarded to it all the way from the UI -> server -> SDK.

The commands are things like: "show me the data that you are currently consuming", "apply these rules to all data that you produce", "inspect the schema for all data", and so on.

The SDK interprets the command and either executes Wasm-based rules against the data it's processing or, if it's a `tail` request, sends the data to the server, which forwards it to the UI for display.

The SDK IS part of the critical path, but it does not have a dependency on the server. If the server is gone, you won't be able to use the UI or send commands to the SDKs, but that's about it - the SDKs will continue to work and attempt to reconnect to the server behind the scenes.

— — —

TECHNICAL BITS

The project consists of a lot of "buzzwordy" tech: we use gRPC, gRPC-Web, protobuf, Redis, Wasm, Deno, ReactFlow, and probably a few other things.

The server is written in Go, all of the Wasm is Rust, and the UI is TypeScript. There are SDKs for Go, Python, and Node. We chose these languages for the SDKs because we've been working in them daily for the past 10+ years.

The reasons for the tech choices are explained in detail here: https://docs.streamdal.com/en/resources-support/open-source/

— — —

LAST PART

OK, that's it. What do you think? Is it useful? Can we answer anything?

- If you like what you're seeing, give our repo a star: https://github.com/streamdal/streamdal

- And if you really like what you're seeing, come talk to us on our Discord: https://discord.gg/streamdal

Talk soon!

- Daniel & Ustin
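The Streamdal SDKs are for Go, Python, and Node; purely to make step 2 of the basic flow concrete, here is a toy Java sketch of the underlying pattern, not the Streamdal API: process() returns the payload untouched while a copy is handed to a background "tailer", so observation stays off the critical path.

import java.nio.charset.StandardCharsets;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Conceptual illustration only: a toy "tail -f for your data".
// Payloads pass through process() unchanged (critical path), while a copy is
// offered to a bounded queue that a background "tailer" drains and prints.
public class DataTail {
    private final BlockingQueue<byte[]> tailQueue = new ArrayBlockingQueue<>(1024);

    public DataTail() {
        Thread tailer = new Thread(() -> {
            try {
                while (true) {
                    byte[] payload = tailQueue.take();
                    // In Streamdal this data would be forwarded to the server/UI; here we just print it.
                    System.out.println("[tail] " + new String(payload, StandardCharsets.UTF_8));
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        tailer.setDaemon(true);
        tailer.start();
    }

    // Call this wherever your app reads or writes data; it returns the payload unchanged.
    public byte[] process(byte[] payload) {
        // offer() drops the copy if the queue is full, so observation never blocks the app.
        tailQueue.offer(payload);
        return payload;
    }

    public static void main(String[] args) throws Exception {
        DataTail tail = new DataTail();
        for (int i = 0; i < 5; i++) {
            byte[] message = ("{\"order_id\": " + i + "}").getBytes(StandardCharsets.UTF_8);
            tail.process(message); // e.g. right before publishing to a queue or writing to a DB
        }
        Thread.sleep(200); // give the tailer a moment to drain before the JVM exits
    }
}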
Show HN: Formbricks – Open-source alternative to Typeform and Sprig
Show HN: MicroTCP, a minimal TCP/IP stack
Show HN: Light implementation of Event Sourcing using PostgreSQL as event store
Hi everyone,

If you have a Java Spring Boot application with a PostgreSQL database, you can implement Event Sourcing without introducing new specialized databases or frameworks.

If your application deals with an entity such as Order, Event Sourcing lets you keep track of all changes and know how the Order got into its current state.

Event Sourcing gives you:

1. the true history of the system (audit and traceability),

2. the ability to put the system in any prior state (debugging),

3. the ability to create read projections from events as needed to respond to new demands.

There are several well-known specialized frameworks and databases for Event Sourcing: EventStoreDB, Marten, Eventuate, to name a few. Adopting a new framework or database you are not familiar with, however, may stop you from trying the Event Sourcing pattern in your project. But you can actually implement Event Sourcing with a few classes and use PostgreSQL as an event store.

The "postgresql-event-sourcing" project is a reference implementation of an event-sourced system, built with Spring Boot, that uses PostgreSQL as an event store. Fork the repository and use it as a template for your projects, or clone the repository and run the end-to-end tests to see how everything works together.

The project describes in detail:

- the database model for storing events,

- synchronous and asynchronous event handlers,

- CQRS,

- the Transactional Outbox pattern,

- the Polling Publisher pattern,

- an optimized publisher that uses PostgreSQL LISTEN/NOTIFY capabilities,

- and more.

The project can easily be extended to fit your domain model.

The source code is available on GitHub: https://github.com/eugene-khyst/postgresql-event-sourcing
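To make "PostgreSQL as an event store" concrete, here is a minimal, illustrative JDBC sketch. The table name, columns, and credentials are made up and the real project's model is richer, but it shows the core pattern: an append-only event table with a (aggregate_id, version) primary key for optimistic concurrency, and state rebuilt by replaying events in order.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

// Minimal illustrative event store on plain JDBC (PostgreSQL driver on the classpath).
public class EventStoreSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/app", "app", "secret")) {

            try (Statement st = conn.createStatement()) {
                st.execute("""
                    CREATE TABLE IF NOT EXISTS es_event (
                      aggregate_id UUID        NOT NULL,
                      version      INT         NOT NULL,
                      event_type   TEXT        NOT NULL,
                      payload      JSONB       NOT NULL,
                      created_at   TIMESTAMPTZ NOT NULL DEFAULT now(),
                      PRIMARY KEY (aggregate_id, version)
                    )""");
            }

            java.util.UUID orderId = java.util.UUID.randomUUID();
            append(conn, orderId, 1, "OrderPlaced",  "{\"total\": 42}");
            append(conn, orderId, 2, "OrderShipped", "{\"carrier\": \"DHL\"}");

            // Rebuild the aggregate's history by replaying its events in order.
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT version, event_type, payload FROM es_event " +
                    "WHERE aggregate_id = ? ORDER BY version")) {
                ps.setObject(1, orderId);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.printf("v%d %s %s%n",
                                rs.getInt("version"), rs.getString("event_type"), rs.getString("payload"));
                    }
                }
            }
        }
    }

    // The primary key on (aggregate_id, version) makes a stale writer fail on a duplicate version.
    static void append(Connection conn, java.util.UUID aggregateId, int version,
                       String type, String jsonPayload) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO es_event (aggregate_id, version, event_type, payload) " +
                "VALUES (?, ?, ?, ?::jsonb)")) {
            ps.setObject(1, aggregateId);
            ps.setInt(2, version);
            ps.setString(3, type);
            ps.setString(4, jsonPayload);
            ps.executeUpdate();
        }
    }
}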
Phind Model beats GPT-4 at coding, with GPT-3.5 speed and 16k context
Hi HN,

We’re excited to announce that Phind now defaults to our own model that matches and exceeds GPT-4’s coding abilities while running 5x faster. You can now get high quality answers for technical questions in 10 seconds instead of 50.

The current 7th-generation Phind Model is built on top of our open-source CodeLlama-34B fine-tunes, which were the first models to beat GPT-4’s score on HumanEval and are still the best open-source coding models overall by a wide margin: https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard

This new model has been fine-tuned on an additional 70B+ tokens of high-quality code and reasoning problems and exhibits a HumanEval score of 74.7%. However, we’ve found that HumanEval is a poor indicator of real-world helpfulness. After deploying previous iterations of the Phind Model on our service, we’ve collected detailed feedback and noticed that our model matches or exceeds GPT-4’s helpfulness most of the time on real-world questions. Many in our Discord community have begun using Phind exclusively with the Phind Model despite also having unlimited access to GPT-4.

One of the Phind Model’s key advantages is that it's very fast. We’ve been able to achieve a 5x speedup over GPT-4 by running our model on H100s using the new TensorRT-LLM library from NVIDIA. We can achieve up to 100 tokens per second single-stream, while GPT-4 runs at around 20 tokens per second at best.

Another key advantage of the Phind Model is context – it supports up to 16k tokens. We currently allow inputs of up to 12k tokens on the website and reserve the remaining 4k for web results.

There are still some rough edges with the Phind Model and we’ll continue improving it constantly. One area where it still suffers is consistency – on certain challenging questions where it is capable of getting the right answer, the Phind Model might take more generations to get to the right answer than GPT-4.

We’d love to hear your feedback.

Cheers,

The Phind Team
Show HN: Gobeats, a Google Drive command line music player
Show HN: Launch a private Ethereum Testnet with all clients and MEV infra
We've been working with the Ethereum Foundation & Flashbots to build tooling and infrastructure for developers to test various workflows for Ethereum. As part of that work, anyone can now spin up a local Ethereum dev net with the entire Flashbots mev-boost infra (relayer, builder, boost) using the ethereum-package. This package supports all EL and CL client types, works on Kubernetes for scale testing, and comes with a few bells and whistles like metrics, mock builders, and beacon chain explorers.

This is the de facto tool for teams modifying and testing the consensus layer.

Here's a full tutorial: https://docs.kurtosis.com/how-to-full-mev-with-ethereum-package/
Show HN: I made a ChatGPT UI that looks like Slack
Show HN: YCombinato – A domain-hacked "Hacker News" client
Hi HN,

This is a little "HN Reader" experiment I made using a domain hack of "news.ycombinato.com" <- notice the "r" is missing.

Basically, I thought it would be cool to make a "clone" of the "Hacker News" URL so you can quickly navigate to "YCombinato" from any post on "news.ycombinator" by just dropping the r in the domain. The benefit is a few extra features like sorting, searching, etc.

Unfortunately, I realized it's essentially a phishing attack :( according to the browser, which means there will probably be a warning message in most browsers... but it's still usable and hopefully people find it enjoyable.

There are so many "Hacker News" clients now that it's almost equivalent to the TODO app for web developers. This project does take it pretty far by replicating the URL, so if moderators are unhappy with it, please let me know.

It's a static site using the Algolia API [1], but I tried hard to make it fast and snappy. It's fully open-source [2] and hosted on GitHub too.

Anyway, thought I'd share it, let me know what you think!

[1] https://hn.algolia.com/api

[2] https://github.com/benwinding/ycombinato
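For anyone curious what building on the Algolia HN API looks like, here is a minimal Java sketch of the search call. YCombinato itself is a static web app, so this is only an illustration of the API, not the project's code.

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Tiny client for the Algolia Hacker News search API that YCombinato is built on.
public class HnSearch {
    public static void main(String[] args) throws Exception {
        String query = URLEncoder.encode("show hn", StandardCharsets.UTF_8);
        URI uri = URI.create("https://hn.algolia.com/api/v1/search?tags=story&query=" + query);

        HttpClient client = HttpClient.newHttpClient();
        HttpResponse<String> response = client.send(
                HttpRequest.newBuilder(uri).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        // Quick-and-dirty title extraction; a real client would parse the JSON with a library like Jackson.
        Matcher m = Pattern.compile("\"title\":\"(.*?)\"").matcher(response.body());
        while (m.find()) {
            System.out.println(m.group(1));
        }
    }
}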
Show HN: Unlogged – open-source record and replay for Java
Hello HN! Parth and Shardul here. We have been building unlogged.io for the last 21 months. We started as a time travel debugger and pivoted to record and replay with assertions, mocking, and code coverage. You can save the replays as JSON and commit them to your Git repository.

Both Parth and I come from an e-commerce/payments background where production bugs meant heavy financial losses. Big Billion Days / Black Friday sales meant months of code freezes with low productivity. Before committing code, we wanted to replay production traffic and know about breaking changes right away, in sub-second time. Kind of like unit + integration tests on steroids.

So, we built an SDK that adds probes to the code at compile time. The SDK logs code execution in detail.

Git: https://github.com/unloggedio/unlogged-sdk

We also built an IDE plugin that keeps monitoring code changes, hot reloads these changes, replays the relevant methods, and alerts on failing replays. It also lets developers call Java methods directly, mock downstream methods at runtime, highlight code coverage in real time, and show performance numbers for methods with inlay hints (right above each method).

Git: https://github.com/unloggedio/intellij-java-plugin

We are excited to launch the first version of our product that replays with assertions + mocking + code coverage reports right inside the IDE.

Link to our IntelliJ plugin: https://plugins.jetbrains.com/plugin/18529-unlogged/

Record and Replay Demo: https://www.youtube.com/watch?v=muCyE-doEB0

Define Assertions on Replay: https://www.youtube.com/watch?v=YKsi1p634-M

Track Code Coverage: https://www.youtube.com/watch?v=NMmp954kfaU

Generate JUnit Test Cases: https://www.youtube.com/watch?v=rTUmg5b1Z_Q

Mocking when replaying: https://www.youtube.com/watch?v=O_aqU1u-Kmw

Documentation: http://read.unlogged.io/

Roadmap:

1. Create a production logger:

- so that the performance impact is minimal,

- with out-of-the-box masking of PII in production logs,

- creating meaningful input/return value combinations from production traffic to be replayed locally.

2. Create a CI test runner that can integrate with CI/CD pipelines.

3. Auto-replay API endpoints of only the changed code.

4. Real-time alerts for the performance impact of code changes.

5. Create a dashboard with reports and email/Slack alerts.
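Purely to illustrate the record-and-replay-with-assertions idea in miniature (this is not how the Unlogged SDK works; it instruments the code with probes at compile time and stores replays as JSON), here is a self-contained Java toy that records calls through a dynamic proxy and replays them against a changed implementation.

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Toy illustration of "record and replay with assertions": record each call's
// arguments and return value, then replay the same arguments later and assert
// the method still produces the recorded result.
public class RecordReplaySketch {

    interface PriceService {
        int priceWithTax(int basePrice);
    }

    // One recorded invocation: method name, arguments, and the observed return value.
    record Recording(String method, Object[] args, Object returned) {}

    static final List<Recording> RECORDINGS = new ArrayList<>();

    // Wraps any interface implementation so every call is recorded.
    @SuppressWarnings("unchecked")
    static <T> T recording(Class<T> iface, T target) {
        InvocationHandler handler = (proxy, method, args) -> {
            Object result = method.invoke(target, args);
            RECORDINGS.add(new Recording(method.getName(),
                    args == null ? new Object[0] : args, result));
            return result;
        };
        return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[]{iface}, handler);
    }

    // Replays every recording against a (possibly changed) implementation and reports mismatches.
    static void replay(PriceService candidate) throws Exception {
        for (Recording r : RECORDINGS) {
            Object actual = PriceService.class
                    .getMethod(r.method(), int.class)
                    .invoke(candidate, r.args());
            String verdict = actual.equals(r.returned()) ? "PASS" : "FAIL";
            System.out.printf("%s %s(%s): recorded=%s actual=%s%n",
                    verdict, r.method(), Arrays.toString(r.args()), r.returned(), actual);
        }
    }

    public static void main(String[] args) throws Exception {
        // "Production" run: record real traffic through the proxy.
        PriceService v1 = recording(PriceService.class, base -> base + base / 10); // 10% tax
        v1.priceWithTax(100);
        v1.priceWithTax(250);

        // Later, replay the recorded traffic against a changed implementation.
        PriceService v2 = base -> base + base / 5; // someone bumped tax to 20%
        replay(v2); // FAIL lines flag the breaking change before it ships
    }
}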