The best Show HN stories from Hacker News from the past day
Latest posts:
Show HN: Roons – Mechanical Computer Kit
I built a mechanical computer kit: <a href="https://whomtech.com/show-hn" rel="nofollow">https://whomtech.com/show-hn</a><p>tl;dr: it's a cellular automaton on a "loom" of alternating bars, using contoured tiles to guide marbles through logic gates.<p>It's not just "Turing complete, job done"; I've tried to make it actually practical. Devices are compact, e.g. you can fit a binary adder into a 3cm square. It took me nearly two years and dozens of different approaches.<p>There's a sequence of interactive tutorials to try out, demo videos, and a janky simulator. I've also sent out a few prototype kits and have some more ready to go.<p>Please ask me anything, I will talk about this for hours.<p>-- Jesse
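The post mentions fitting a binary adder into a 3 cm square. As a purely illustrative sketch (not Roons' actual tile layout), this is the Boolean logic a marble-based full adder realizes mechanically:

```python
# Purely illustrative: the Boolean logic a marble-based full adder
# implements mechanically (not Roons' actual tile layout).

def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    """Return (sum, carry_out) for one binary digit."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def add_bits(x: list[int], y: list[int]) -> list[int]:
    """Ripple-carry addition; bit lists are least-significant bit first."""
    carry, out = 0, []
    for a, b in zip(x, y):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)
    return out

# 3 (bits [1, 1], LSB first) + 1 (bits [1, 0]) = 4 (bits [0, 0, 1])
print(add_bits([1, 1], [1, 0]))  # [0, 0, 1]
```

Chaining full adders this way is the standard ripple-carry construction, which maps naturally onto marble logic since each stage's carry feeds the next.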
Show HN: An interactive demo of QR codes' error correction
Hi HN!
This is a hobby project of mine that recently landed me my first interview and helped me get my first internship offers.<p>Draw on a QR code, and the health bars will accurately display how close the QR code is to being unscannable. How few errors does it take to break a QR code? How many errors can a QR code handle? Counters at the bottom track your record minimum and maximum damage. (Can you figure out how to break a QR code with 0.0% damage to the actual data region?)<p>Also, click on the magnifying glass button to toggle between "draw mode" and "inspect mode".
I encourage you to use your phone's camera to scan the code as you draw and undo/redo to verify that the code really does break when the app says it does.<p>I wrote the underlying decoder in C++, and it's compiled to WebAssembly for the website.<p>I hope you find it interesting.
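The "how many errors can a QR code handle" question comes down to Reed-Solomon error correction. A back-of-the-envelope sketch, using the codeword counts for a version 1 symbol (26 codewords total; exact numbers vary by version):

```python
# Back-of-the-envelope Reed-Solomon error tolerance for QR codes.
# Codeword counts are for version 1 symbols; larger versions differ.

EC_CODEWORDS_V1 = {"L": 7, "M": 10, "Q": 13, "H": 17}
TOTAL_CODEWORDS_V1 = 26

def max_correctable_errors(level: str) -> int:
    # Reed-Solomon corrects up to floor(ec / 2) codeword errors
    # when the error positions are unknown.
    return EC_CODEWORDS_V1[level] // 2

for level in "LMQH":
    n = max_correctable_errors(level)
    print(f"{level}: up to {n} of {TOTAL_CODEWORDS_V1} codewords "
          f"({n / TOTAL_CODEWORDS_V1:.0%})")
```

This is why the demo's health bars can drop at different rates: damage inside the data region consumes correction capacity, while damage to fixed patterns (finders, timing) breaks scanning in a different way entirely.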
Show HN: AgenticSeek – Self-hosted alternative to cloud-based AI tools
I’ve spent the last two months building AgenticSeek, a privacy-focused alternative to cloud-based AI tools like ManusAI. It runs entirely on your machine—no API calls, no data leaks.<p>Why AgenticSeek?<p>Optimized for local LLMs (developed mostly on an RTX 3060 running deepseek r1 14b).<p>Truly private: All components (TTS, STT, planner) run locally.<p>More responsive than alternatives (we respond fast to issues + active Discord).<p>Designed to be fun—think JARVIS-like voice control, multi-agent workflows, and a slick web UI.<p>Current Features:<p>Web browsing (research + form filling), code write/fix, file management/search. Planning capabilities to use multiple agents for complex tasks.<p>Is it stable?
Prototype-stage—great for tinkerers.<p>Hoping to get feedback!
Show HN: An MCP server for understanding AWS costs
Hey all - I work at Vantage, a FinOps platform.<p>I know AI is peak hype right now. But it has definitely changed some of our dev workflows already. So we wanted to find a way to let our customers experiment with how they can use AI to make their cloud cost management work more productive.<p>The MCP Server acts as a connector between LLMs (right now only Claude and Cursor support it, but ChatGPT and Google Gemini support is coming soon) and your cost and usage data on Vantage, which supports 20+ cloud infra providers including AWS, Datadog, Mongo, etc. (You have to have a Vantage account to use it since it's using the Vantage API.)<p>Video demo: <a href="https://www.youtube.com/watch?v=n0VP2NlUvRU" rel="nofollow">https://www.youtube.com/watch?v=n0VP2NlUvRU</a><p>Repo: <a href="https://github.com/vantage-sh/vantage-mcp-server">https://github.com/vantage-sh/vantage-mcp-server</a><p>It's really impressive how capable the latest-gen models are with an MCP server and an API. So far we have found it useful for:<p>Ad-hoc questions: "What's our non-prod cloud spend per engineer if we have 25 engineers"<p>Action plans: "Find unallocated spend and look for clues how it should be tagged"<p>Multi-tool workflows: "Find recent cost spikes that look like they could have come from eng changes and look for GitHub PR's merged around the same time" (using it in combination with the GitHub MCP)<p>Thought I'd share, let me know if you have questions.
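For context on what an MCP server receives under the hood: MCP uses JSON-RPC 2.0, and tool invocations arrive as `tools/call` requests. The tool name and arguments below are hypothetical, not Vantage's actual schema:

```python
import json

# Shape of an MCP "tools/call" request (JSON-RPC 2.0). The tool name
# and arguments here are hypothetical, not Vantage's actual schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query-costs",  # hypothetical tool name
        "arguments": {"filter": "non-prod", "group_by": "provider"},
    },
}
print(json.dumps(request, indent=2))
```

The LLM client fills in `name` and `arguments` based on the tool descriptions the server advertises, which is why natural-language questions like the examples above can be turned into structured API queries.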
Show HN: Create your own finetuned AI model using Google Sheets
Hello HN,<p>We built Promptrepo to make finetuning accessible to product teams — not just ML engineers. Last week, OpenAI’s CPO shared how they use fine-tuning for everything from customer support to deep research, and called it the future for serious AI teams. Yet most teams I know still rely on prompting, because fine-tuning is too technical, while the people who have the training data (product managers and domain experts) are often non-technical. With Promptrepo, they can now:<p>- Add training examples in Google Sheets<p>- Click a button to train<p>- Deploy and test instantly<p>- Use OpenAI, Claude, Gemini or Llama models<p>We’ve used this internally for years to power AI workflows in our products (Formfacade, Formesign, Neartail), and we're now opening it up to others. Would love your feedback and happy to answer any questions!<p>---<p>Try it free - <a href="https://promptrepo.com/finetune" rel="nofollow">https://promptrepo.com/finetune</a><p>Demo video - <a href="https://www.youtube.com/watch?v=e1CTin1bD0w" rel="nofollow">https://www.youtube.com/watch?v=e1CTin1bD0w</a><p>Why we built it - <a href="https://guesswork.co/support/post/fine-tuning-is-the-future-and-now-its-within-every.anc-ddfd2598-5798-423d-b6ec-e7d84e98847a.html" rel="nofollow">https://guesswork.co/support/post/fine-tuning-is-the-future-...</a>
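Under the hood, "training examples in a sheet" means converting rows into the JSONL chat format that fine-tuning APIs such as OpenAI's expect. A hedged sketch (the rows and system prompt are made up; Promptrepo's actual pipeline may differ):

```python
import json

# Hedged sketch: turning spreadsheet-style (input, output) rows into the
# JSONL chat format OpenAI's fine-tuning API expects. Rows and system
# prompt are invented for illustration.

rows = [
    ("What's your refund policy?", "Refunds within 30 days of purchase."),
    ("Do you ship overseas?", "Yes, to most countries."),
]

def to_jsonl(rows, system_prompt="You are a support assistant."):
    lines = []
    for user_text, assistant_text in rows:
        example = {"messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_text},
            {"role": "assistant", "content": assistant_text},
        ]}
        lines.append(json.dumps(example))
    return "\n".join(lines)

print(to_jsonl(rows))
```

Hiding this conversion (plus upload, job polling, and deployment) behind a "train" button is what makes the sheet workflow accessible to non-engineers.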
Show HN: Memex is a Claude Code alternative built on Rust+Tauri for vibe coding
Hi HN,<p>TL;DR Memex is a cross-platform desktop app for vibe coding. Think ChatGPT + Claude Code rolled into one.<p>Why we built it: We love chat tools like Perplexity and ChatGPT. We also love coding agents, like in Cursor and Windsurf. We don’t like that web-based app builders are opinionated about tech stack and we can’t run them locally. So, we built Memex to be a chat tool + coding agent that supports any tech stack.<p>What it can do today: Claude Code-like coding. Agentic web search / research. Pre-built templates (e.g. fullstack, iOS, python + modal, etc). Inline data analysis + viz. Checkpointing (shadow git repo). Privacy mode.<p>How it works: Written in TS+Rust+Python, using Tauri for the cross-platform build (macOS, Windows, Linux). It has a bundled python environment for data analysis. Agent uses a mix of Sonnet 3.7 + Haiku.<p>Status & roadmap: Free download with free tier and paid plan: <a href="https://memex.tech" rel="nofollow">https://memex.tech</a>. Up next: [1] Additional model support (e.g. Gemini 2.5). [2] MCP support. [3] Computer use.<p>Ask: Kick the tires. Give us feedback on product + roadmap. If you love it – spread the word!<p>Thanks! David
Show HN: Heart Rate Zones Plus – The first iOS app I developed
I built this iOS app because I wanted an overview of my time in zones per week without manually checking zones after every workout - now I'm looking for feedback.<p>Description: Track time in heart rate zones. Track per day, week, month, or 7- and 30-day periods, and see how much time you spend in each zone. Set goals & visualize progress. Get details about the heart rate zones of your workouts.<p>Features: Custom time periods, workout-to-zone attribution to see which sport contributed most to each zone, multiple zone calculation methods, personal time goals for any zone, workout breakdown<p>Pricing: Free<p>Privacy: Nothing is tracked or sent anywhere. Data stays on your device.<p>Any feedback and feature requests are appreciated.<p>Download: <a href="https://apps.apple.com/us/app/heart-rate-zones-plus/id6744743232">https://apps.apple.com/us/app/heart-rate-zones-plus/id674474...</a><p>Video of the app in action: <a href="https://www.youtube.com/shorts/-qtHxEdMEv0" rel="nofollow">https://www.youtube.com/shorts/-qtHxEdMEv0</a>
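One common zone-calculation method an app like this could offer is the Karvonen (heart-rate-reserve) formula; the zone boundaries below are illustrative, not necessarily the app's:

```python
# Karvonen (heart-rate-reserve) method, one common way to compute
# heart rate zones. Zone boundaries here are illustrative.

def karvonen(intensity: float, hr_max: int, hr_rest: int) -> int:
    """Target heart rate at a given intensity (0.0-1.0) of HR reserve."""
    return round(hr_rest + intensity * (hr_max - hr_rest))

def zones(hr_max: int, hr_rest: int):
    bounds = [0.5, 0.6, 0.7, 0.8, 0.9, 1.0]  # zone 1-5 edges
    return [(karvonen(lo, hr_max, hr_rest), karvonen(hi, hr_max, hr_rest))
            for lo, hi in zip(bounds, bounds[1:])]

print(zones(hr_max=190, hr_rest=60))
```

Because the formula anchors zones to resting heart rate rather than just a percentage of max, two people with the same max HR can get noticeably different zones, which is one reason supporting multiple calculation methods matters.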
Show HN: Sim Studio – Open-Source Agent Workflow GUI
Hi HN! We're Emir and Waleed, and we're building Sim Studio (<a href="https://simstudio.ai" rel="nofollow">https://simstudio.ai</a>), an open-source drag and drop UI for building and managing multi-agent workflows as a directed graph. You can define how agents interact with each other, use tools, and handle complex logic like branching, loops, transformations, and conditional execution.<p>Our repo is <a href="https://github.com/simstudioai/sim">https://github.com/simstudioai/sim</a>, docs are at <a href="https://docs.simstudio.ai/introduction" rel="nofollow">https://docs.simstudio.ai/introduction</a>, and we have a demo here: <a href="https://youtu.be/JlCktXTY8sE?si=uBAf0x-EKxZmT9w4" rel="nofollow">https://youtu.be/JlCktXTY8sE?si=uBAf0x-EKxZmT9w4</a><p>Building reliable, multi-step agent systems with current frameworks often gets complicated fast. In OpenAI's 'practical guide to building agents', they claim that the non-declarative approach and single multi-step agents are the best path forward, but from experience and experimentation, we disagree. Debugging these implicit flows across multiple agent calls and tool uses is painful, and iterating on the logic or prompts becomes slow.<p>We built Sim Studio because we believe defining the workflow explicitly and visually is the key to building more reliable and maintainable agentic applications. In Sim Studio, you design the entire architecture, comprising agent blocks that have system prompts, a variety of models (hosted and local via Ollama), tools with granular tool use control, and structured output.<p>We have plenty of pre-built integrations that you can use as standalone blocks or as tools for your agents. The nodes are all connected with if/else conditional blocks, LLM-based routing, loops, and branching logic for specialized agents.<p>Also, the visual graph isn't just for prototyping and <i>is</i> actually executable. 
You can run simulations of the workflows 1, 10, 100 times to see how any small change to a system prompt, underlying model, or tool call impacts the overall performance of the workflow.<p>You can trigger the workflows manually, deploy as an API and interact via HTTP, or schedule the workflows to run periodically. They can also be set up to trigger on incoming webhooks and deployed as standalone chat instances that can be password or domain-protected.<p>We have granular trace spans, logs, and observability built-in so you can easily compare and contrast performance across different model providers and tools. All of these things enable a tighter feedback loop and significantly faster iteration.<p>So far, users have built deep research agents to detect application fraud, chatbots to interface with their internal HR documentation, and agents to automate communication between manufacturing facilities.<p>Sim Studio is Apache 2.0 licensed, and fully open source.<p>We're excited about bringing a visual, workflow-centric approach to agent development. We think it makes building robust, complex agentic workflows far more accessible and reliable. We'd love to hear the HN community's thoughts!
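The core idea of an explicit, executable workflow graph can be sketched in a few lines: nodes are callables and edges carry conditions. This is not Sim Studio's actual engine, just an illustration of executing a declared graph with conditional routing:

```python
# Minimal sketch of an explicit agent workflow graph: nodes are callables,
# edges carry conditions checked in order. Not Sim Studio's actual engine.

def run(graph, start, state):
    node = start
    while node is not None:
        state = graph[node]["fn"](state)
        # Follow the first edge whose condition holds; stop if none do.
        node = next((dst for cond, dst in graph[node]["edges"]
                     if cond(state)), None)
    return state

graph = {
    "classify": {"fn": lambda s: {**s, "urgent": "outage" in s["msg"]},
                 "edges": [(lambda s: s["urgent"], "page"),
                           (lambda s: True, "ticket")]},
    "page":   {"fn": lambda s: {**s, "route": "on-call"}, "edges": []},
    "ticket": {"fn": lambda s: {**s, "route": "queue"}, "edges": []},
}

print(run(graph, "classify", {"msg": "database outage"}))
```

Because the routing lives in the graph rather than inside a single multi-step prompt, each decision point is inspectable, which is what makes the repeated-simulation and tracing features feasible.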
Show HN: Flowcode – Turing-complete visual programming platform
Hey HN! I’m Gabriel, and I’m excited to share a project I’ve been working on for the last few years. Flowcode is a visual programming platform that tries to combine the best of both worlds (code and visual). Over the years I found myself repeatedly drawing architectures and logic. It was always my dream to just press “run” instead of having to write them in code afterwards. But none of the visual tools I found were flexible and transparent enough for building real products.<p>I think that visual programming fits perfectly with modern backend dev tasks that revolve around connecting different services with basic logic. Flowcode is meant to speed up and simplify those tasks, leaving more time to think about design and solve design problems. Visual programming also works really well for developing workflows involving LLM calls that are non-deterministic and require a lot of debugging and prompt tweaking.<p>There are many other visual/low code tools, but they all offer limited control and flexibility (no concurrency, loops, transparency) and most suffer from the same problems (vendor lock-in, hard to integrate with existing code etc.).<p>Flowcode is built on an open source visual programming language (Flyde <a href="https://github.com/flydelabs/flyde">https://github.com/flydelabs/flyde</a>, which I launched last year here on HN - <a href="https://news.ycombinator.com/item?id=39628285">https://news.ycombinator.com/item?id=39628285</a>). This means Flowcode has true concurrency, no vendor lock-in (you can export flows as .flyde files), is Turing-complete (loops, recursion, control flows, multiple IOs etc.), lets you fork any node, integrates with code via an SDK and more.<p>I’d love to hear your thoughts and feedback.
Show HN: A pure WebGL image editor with filters, crop and perspective correction
I'm working on a pure JavaScript WebGL image editor with effects, filters, crop & perspective correction, etc.
My goal is to give the community an open-source solution, as unfortunately most comparable apps are closed source.<p><a href="https://mini2-photo-editor.netlify.app" rel="nofollow">https://mini2-photo-editor.netlify.app</a> to try it out (<a href="https://github.com/xdadda/mini-photo-editor">https://github.com/xdadda/mini-photo-editor</a>)
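Perspective correction generally boils down to applying a 3x3 homography in homogeneous coordinates, which is what a WebGL shader does per pixel. A sketch of the math (matrix values are arbitrary, for illustration only):

```python
# Perspective correction as a 3x3 homography applied in homogeneous
# coordinates - the per-pixel math a WebGL shader performs.
# Matrix values are arbitrary, for illustration only.

def apply_homography(H, x, y):
    """Map point (x, y) through 3x3 matrix H (row-major nested lists)."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w  # divide by w to get back to 2D

# Identity leaves points unchanged; a nonzero bottom row "tilts" the plane.
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
tilt = [[1, 0, 0], [0, 1, 0], [0.001, 0, 1]]
print(apply_homography(I, 100, 50))     # (100.0, 50.0)
print(apply_homography(tilt, 100, 50))  # coordinates shrink as w grows
```

The perspective divide by `w` is what distinguishes a homography from an affine transform and lets a keystoned photo be straightened back into a rectangle.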