The best Hacker News stories from Show HN from the past day
Latest posts:
Show HN: Strich – Barcode scanning for web apps
Hi, I'm Alex - the creator of STRICH (https://strich.io), a barcode scanning library for web apps.

Barcode scanning in web apps is nothing new. In my previous work, I've had the opportunity to use both high-end commercial offerings (e.g. Scandit) and OSS libraries like QuaggaJS or ZXing-JS in a wide range of customer projects, mainly in logistics.

I became dissatisfied with both. The established commercial offerings had five- to six-figure license fees, and the developer experience was not always optimal. The web browser as a platform also seemed not to be the main priority for these players. The open source libraries are essentially unmaintained and not suitable for commercial use due to the lack of support. Also, the recognition performance is not sufficient for some use cases - for a detailed comparison, see https://strich.io/comparison-with-oss.html

Having dabbled a bit in computer vision topics before, and armed with an understanding of the market situation, I set out to build an alternative to fill the gap between the two worlds. After almost two years of on-and-off development and six months of piloting with a key customer, STRICH launched at the beginning of this year.

STRICH is built exclusively for web browsers running on smartphones. I believe the vast majority of barcode scanning apps are in-house line-of-business apps that benefit from distribution outside of app stores and a single codebase with abundant developer resources. Barcode scanning in web apps is efficient and avoids the platform risk and unnecessary costs associated with developing and publishing native apps.
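To give a sense of what integrating a library like this looks like, here is a minimal TypeScript sketch. The package name and API entry points below are assumptions for illustration only - consult https://strich.io for the actual documented interface.

```
// Hypothetical usage sketch. The SDK names below are assumptions for
// illustration; see https://strich.io for the documented API.
import { StrichSDK, BarcodeReader } from '@pixelverse/strichjs-sdk';

async function startScanning(): Promise<void> {
  // Initialize the SDK with a license key (assumed initialization call)
  await StrichSDK.initialize('<your-license-key>');

  // Attach a reader to a host element and restrict the barcode types
  const reader = new BarcodeReader({
    selector: '#scanner', // host element that shows the camera preview
    engine: { symbologies: ['code128', 'ean13'] },
  });
  await reader.initialize();

  // Handle detected barcodes
  reader.detected = (detections) => {
    console.log('Scanned:', detections[0].data);
  };
  await reader.start();
}

startScanning().catch(console.error);
```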
Show HN: Repo with a list of 80 decent companies hiring remotely in Europe
Tech stack included
Show HN: LlamaGPT – Self-hosted, offline, private AI chatbot, powered by Llama 2
Show HN: Layerform – Open-source development environments using Terraform files
Hi HN, we're Lucas and Lucas, the authors of Layerform (https://github.com/ergomake/layerform). Layerform is an open-source tool for setting up development environments using plain .tf files. We allow each engineer to create their own "staging" environment and reuse infrastructure.

Whenever engineers run `layerform spawn`, we use plain .tf files to give them their own "staging" environment that looks just like production.

Many teams have a single (or too few) staging environments, which developers have to queue to use. This is particularly a problem when a system is large, because then engineers can't run it on their machines and cannot easily test their changes in a production-like environment. Often they end up with a cluttered Slack channel in which engineers wait for their turn to use staging. Sometimes they don't even have that clunky channel and end up merging broken code or shipping bugs to production. Lucas and I decided to solve this because we previously suffered with shared staging environments.

Layerform gives each developer their own production-like environment. This eliminates the bottleneck, increasing the number of deploys engineers make. Additionally, it reduces the number of bugs and the amount of rework, because developers have a production-like environment to develop and test against. They can just run `layerform spawn` and get their own staging.

We wrap the MPL-licensed Terraform and allow engineers to encapsulate each part of their infrastructure into layers. They can then create multiple instances of a particular layer to create a development environment. The benefit of using layers instead of raw Terraform modules is that they're much easier to write and reuse, meaning multiple development environments can run on top of the same infrastructure.

Layerform's environments are quick and cheap to spin up because they share core pieces of infrastructure. Additionally, Layerform can automatically tag components in each layer, making it easier for FinOps teams to manage costs and do chargebacks.

For example: with Layerform, a product developer can spin up their own lambdas and pods for staging while still using a shared Kubernetes cluster and Kafka instance. That way, development environments are quicker to spin up and cheaper to maintain. Each developer's layer also gets a tag, meaning FinOps teams know how much each team's environments cost.

For the sake of transparency, the way we intend to make money is by providing a managed service with governance, management, and cost-control features, including turning off environments automatically on inactivity or after business hours. The Layerform CLI itself will remain free and open (GPL).

You can download the Layerform CLI right now and use it for free. Currently, all the state, permissions, and layer definitions stay in your cloud, under your control.

After the whole license change thing, I think it's also worth mentioning that we'll be building on top of the community's fork and will consider adding support for Pulumi too.

We'd love your feedback on our solution to eliminate "the staging bottleneck". What do you think?
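As a toy illustration of the layering idea (our own sketch, not Layerform's actual data model or file format): base layers are shared and provisioned once, while leaf layers get one instance per developer.

```
// Toy model of layered environments. Shared base layers (cluster, Kafka)
// are provisioned once; leaf layers are instantiated per developer.
// Illustrative sketch only - not Layerform's real implementation.
interface Layer {
  name: string;
  dependsOn: string[]; // lower layers this one builds on
  shared: boolean;     // shared layers are reused across environments
}

const layers: Layer[] = [
  { name: 'eks_cluster', dependsOn: [], shared: true },
  { name: 'kafka', dependsOn: ['eks_cluster'], shared: true },
  { name: 'services', dependsOn: ['eks_cluster', 'kafka'], shared: false },
];

// What would a spawn have to create? Reuse shared layers that already
// exist; instantiate everything else for this developer.
function planSpawn(existing: Set<string>): string[] {
  return layers
    .filter((l) => !(l.shared && existing.has(l.name)))
    .map((l) => l.name);
}

console.log(planSpawn(new Set(['eks_cluster', 'kafka']))); // ['services']
```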
Show HN: Servicer, pm2 alternative built on Rust and systemd
Servicer is a CLI to create and manage services on systemd. I have used pm2 in production and find it easy to use. However, a lot of its functionality is specific to Node.js, and I would prefer not to run my Rust server as a fork of a Node process. systemd, on the other hand, has most of the things I need, but I found it cumbersome to use. There are a bunch of different commands and configurations - the .service file, systemctl to view status, journald to view logs - which makes systemd more complex to set up. I had to google for a template and the commands every time.

Servicer abstracts this setup behind an easy-to-use CLI. For instance, you can use `ser create index.js --interpreter node --enable --start` to create a `.service` file, enable it on boot, and start it. Servicer will also help if you wish to write your own custom `.service` files. Run `ser edit foo --editor vi` to create a service file in Vim. Servicer will provide a starting template so you don't need to google it. There are additional utilities like `ser which index.js` to view the path of the service and unit file:

```
Paths for index.js.ser.service:
+--------------+-----------------------------------------------------------+
| name | path |
+--------------+-----------------------------------------------------------+
| Service file | /etc/systemd/system/index.js.ser.service |
+--------------+-----------------------------------------------------------+
| Unit file | /org/freedesktop/systemd1/unit/index_2ejs_2eser_2eservice |
+--------------+-----------------------------------------------------------+
```

Servicer is daemonless and does not run in the background. It simply sets up systemd and gets out of the way. There are no forked processes; everything is set up natively on systemd. You don't need to worry about resource consumption, or about servicer going down and taking your app down with it.

Do give it a spin and review the codebase. The code is open source and MIT licensed: https://github.com/servicer-labs/servicer
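For reference, the generated unit for the `index.js` example above might look roughly like the sketch below - a typical minimal unit for a Node process, assumed for illustration rather than servicer's exact output:

```
# Illustrative sketch of /etc/systemd/system/index.js.ser.service;
# not necessarily servicer's exact output.
[Unit]
Description=index.js, managed by servicer

[Service]
ExecStart=/usr/bin/node /path/to/index.js
Restart=on-failure

[Install]
WantedBy=multi-user.target
```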
Show HN: Lottielab – Create product animations in the browser easily
Hi HN! Today we are releasing Lottielab, a web-based animation tool, to the public as an open beta. The main tool for editing and exporting Lottie animations today is Adobe After Effects, a 30-year-old visual effects tool that’s not fit for this purpose, has a steep learning curve, and requires a patchwork of error-prone plugins. With Lottielab, we are aiming to reduce the friction of creating and editing product animations by providing an easy-to-use editor with out-of-the-box support for import and export of the Lottie format and many others. Feel free to play around with the tool and let me know what you think - I'm here to answer your questions. Happy animating!
Show HN: Llama2 Embeddings FastAPI Server
Author here. I just wanted a quick and easy way to submit strings to a REST API and get back the embedding vectors in JSON using Llama2 and other similar LLMs, so I put this together over the past couple of days. It's very quick and easy to set up, and totally self-contained and self-hosted. You can easily add new models to it by simply adding the HuggingFace URL to the GGML-format model weights. Two models are included by default, and these are automatically downloaded the first time it's run.

It lets you not only submit text strings and get back the embeddings, but also compare two strings and get back their similarity score (i.e., the cosine similarity of their embedding vectors). You can also upload a plaintext file or PDF and get back all the embeddings for every sentence in the file as a zipped JSON file (and you can specify the layout of this JSON file).

Each time an embedding is computed for a given string with a given LLM, that vector is stored in the SQLite database and can be returned immediately. You can also search across all stored vectors easily using a query string; this uses the integrated FAISS index.

There are lots of nice performance enhancements, including parallel inference, a DB write queue, fully async everything, and even a RAM disk feature to speed up model loading.

I'm working now on adding additional API endpoints for easily generating sentiment scores using presets for different focus areas, but that's still work in progress (the code for this so far is in the repo, though).
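As an illustration, calling the server from a client could look like the sketch below. The endpoint path and JSON field names are assumptions (FastAPI serves the real schema at /docs on the running server); the cosine-similarity helper mirrors the similarity score described above.

```
// Hypothetical client for the embeddings server. The endpoint path and
// JSON field names are assumptions; the actual schema is served by
// FastAPI at /docs on the running server.
async function getEmbedding(text: string): Promise<number[]> {
  const res = await fetch('http://localhost:8000/get_embedding_vector_for_string', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text }), // assumed request shape
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const body = await res.json();
  return body.embedding; // assumed response field
}

// Cosine similarity of two embedding vectors, as described above:
// dot(a, b) / (|a| * |b|)
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}
```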
CSS Selectors: A Visual Guide
Show HN: AI-town, run your own custom AI world SIM with JavaScript
Hi HN community! We want to share AI-town, a deployable starter kit for building and customizing your own version of an AI simulation - a virtual town where AI characters live, chat, and socialize.

Inspired by the great work of the Stanford generative agents paper (https://arxiv.org/abs/2304.03442).

A few features:
- Includes a convex.dev-backed server-side game engine that handles global state
- Multiplayer-ready and deployment-ready
- 100% TypeScript
- Easily customizable: fork it, change character memories, add new sprites/tiles, and you have a custom AI simulation

The goal is to democratize building your own simulation environment with AI agents. We would love to see the community build more complex interactions on top of this. Let us know what you think!

Demo: https://www.convex.dev/ai-town

I made a world, Cat Town, to demonstrate how to customize AI-town - using C(h)atGPT :)

Demo: https://cat-town.fly.dev/
Code: https://github.com/ykhli/cat-town
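For a flavor of the kind of customization described above, a character in a setup like this boils down to data: a sprite, a persona, and seed memories. The shape below is an assumption for illustration, not AI-town's actual schema - see the repo for the real data model.

```
// Illustrative character definition. Field names are assumptions,
// not AI-town's actual schema; see the repo for the real data model.
interface CharacterConfig {
  name: string;
  spriteSheet: string; // path to the sprite/tile asset
  identity: string;    // persona description fed to the LLM
  memories: string[];  // seed memories the agent starts with
}

const mittens: CharacterConfig = {
  name: 'Mittens',
  spriteSheet: 'assets/cats/mittens.png',
  identity: 'A curious cat who loves chatting with strangers in town.',
  memories: [
    'I moved to Cat Town last spring.',
    'I am searching for the best sunny nap spot in town.',
  ],
};
```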