The best Show HN stories from Hacker News from the past day

Latest posts:

Show HN: I built an open-source data pipeline tool in Go

Every data pipeline job I had to tackle required quite a few components to set up:

- One tool to ingest data
- Another one to transform it
- If you wanted to run Python, set up an orchestrator
- If you needed to check the data, a data quality tool

Not only is all of this hard to set up and time-consuming, it is also pretty high-maintenance. I had to do a lot of infra work, and while those were billable hours for me, I didn't enjoy the work at all. For some parts of it there were nice solutions like dbt, but in the end, for an end-to-end workflow, it didn't work. That's why I decided to build an end-to-end solution that could take care of data ingestion, transformation, and the Python side of things. Initially it was just for our own usage, but in the end we thought this could be a useful tool for everyone.

At its core, Bruin is a data framework that consists of a CLI application written in Go and a VS Code extension that supports it with a local UI.

Bruin supports quite a few things:

- Data ingestion using ingestr (https://github.com/bruin-data/ingestr)
- Data transformation in SQL & Python, similar to dbt
- Python environment management using uv
- Built-in data quality checks
- Secrets management
- Query validation & SQL parsing
- Built-in templates for common scenarios, e.g. Shopify, Notion, Gorgias, BigQuery, etc.

This means you can write end-to-end pipelines within the same framework and get them running with a single command. You can run it on your own computer, on GitHub Actions, or on an EC2 instance somewhere. Using the templates, you can also have ready-to-go pipelines with modeled data for your data warehouse in seconds.

It includes an open-source VS Code extension as well, which lets you work with data pipelines locally in a more visual way. The resulting changes are all in code, which means everything is version-controlled regardless; the extension just adds a nice layer on top.

Bruin can run SQL, Python, and data ingestion workflows, as well as quality checks. For the Python side, we use the awesome (and it really is awesome!) uv under the hood to install dependencies in an isolated environment and to install and manage Python versions locally, all in a cross-platform way. To manage data uploads to the data warehouse, it uses dlt under the hood to upload the data to the destination. It also uses Arrow's memory-mapped files to easily share data between processes before uploading it to the destination.

We went with Go because of its speed and strong concurrency primitives, but more importantly, I knew Go better than the other languages available to me and I enjoy writing Go, so there's also that.

We had a small pool of beta testers for quite some time, and I am really excited to launch Bruin CLI to the rest of the world and get feedback from you all. I know it is not common to build data tooling in Go, but I believe we found ourselves in a nice spot in terms of features, speed, and stability.

https://github.com/bruin-data/bruin

I'd love to hear your feedback and learn more about how we can make data pipelines easier and better to work with. Looking forward to your thoughts!

Best, Burak
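
The Arrow memory-mapped file handoff described above is a general technique rather than anything Bruin-specific; here is a minimal Python sketch of the idea using pyarrow (the file name and columns are made up for illustration):

```python
# Producer side: write a table to an Arrow IPC file on disk.
import pyarrow as pa
import pyarrow.ipc as ipc

table = pa.table({"id": [1, 2, 3], "name": ["a", "b", "c"]})
with pa.OSFile("staging.arrow", "wb") as sink:
    with ipc.new_file(sink, table.schema) as writer:
        writer.write_table(table)

# Consumer side (typically a separate process): memory-map the same file and
# read it back without copying the whole dataset into memory up front.
with pa.memory_map("staging.arrow", "r") as source:
    loaded = ipc.open_file(source).read_all()

print(loaded.num_rows)  # -> 3
```

In a pipeline, the ingestion step would play the producer role and the upload step the consumer, with the memory-mapped file acting as the shared staging area, which is roughly the handoff the post describes.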

Show HN: NCompass Technologies – yet another AI Inference API, but hear us out

Hello HackerNews!

I'm excited to share what we've been working on at nCompass Technologies: an AI inference* platform that gives you a scalable and reliable API to access any open-source AI model, with no rate limits. We don't have rate limits because optimizations we made to our AI model serving software enable us to support a high number of concurrent requests without degrading quality of service for you as a user.

If you're thinking, "well, aren't there a bunch of these already?", so were we when we started nCompass. When using other APIs, we found that they weren't reliable enough to use open-source models in production environments. To resolve this, we're building an AI inference engine that enables you, as an end user, to reliably use open-source models in production.

Underlying this API, we're building optimizations at the hosting, scheduling, and kernel levels with the single goal of minimizing the number of GPUs required to maximize the number of concurrent requests you can serve, without degrading quality of service.

We're still building a lot of our optimizations, but we've released what we have so far via our API. We currently keep time-to-first-token (TTFT) 2-4x lower than vLLM at the equivalent concurrent request rate. You can check out a demo of our API here:

https://www.loom.com/share/c92f825ac0af4ab18296a16546a75be3

As a result of the optimizations we've rolled out so far, we're releasing a few unique features on our API:

1. Rate limits: we don't have any

Most other APIs out there have strict rate limits and can be rather unreliable. We don't want APIs for open-source models to remain a solution for prototypes only. We want people to use these APIs like they do OpenAI's or Anthropic's and actually build production-grade products on top of open-source models.

2. Underserved models: we have them

There are a ton of models out there, but not all of them are readily available for people to use if they don't have access to GPUs. We envision our API becoming a system where anyone can launch any custom model of their choice with minimal cold starts and run the model as a simple API call. Our cold starts for any 8B or 70B model are only 40s, and we'll keep improving this.

Towards this goal, we already have models like `ai4bharat/hercule-hi` hosted on our API to support non-English language use cases, and models like `Qwen/QwQ-32B-Preview` to support reasoning-based use cases. You can find the other models that we host here: https://console.ncompass.tech/public-models for public ones, and https://console.ncompass.tech/models for private ones that work once you've created an account.

We'd love for you to try out our API by following the steps here: https://www.ncompass.tech/docs/llm_inference/quickstart. We provide $100 of free credit on sign-up to run models, and like we said, go crazy with your requests; we'd love to see if you can break our system :)

We're still actively building out features and optimizations, and your input can help shape the future of nCompass. If you have thoughts on our platform or want us to host a specific model, let us know at hello@ncompass.tech.

Happy Hacking!

* It's called inference because the process of taking a query, running it through the model, and producing a result is referred to as "inference" in the AI / machine learning world, as opposed to "training" or "fine-tuning", which are the processes used to actually develop the AI models that you then run "inference" on.
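
For context on the TTFT number, here is a rough Python sketch of how time-to-first-token is typically measured against a streaming chat endpoint. The base URL, path, and payload shape below are placeholders assuming an OpenAI-style streaming API, not nCompass's documented interface; the quickstart linked above has the real calls.

```python
import time
import requests

API_KEY = "YOUR_KEY"                       # placeholder
BASE_URL = "https://api.example.com/v1"    # placeholder endpoint

payload = {
    "model": "Qwen/QwQ-32B-Preview",
    "messages": [{"role": "user", "content": "Say hello."}],
    "stream": True,
}

start = time.monotonic()
with requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    stream=True,
    timeout=60,
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if line:  # first non-empty streamed line ~ first token arriving
            print(f"TTFT: {time.monotonic() - start:.3f}s")
            break
```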

Show HN: Autonomous AI agents that monitor the stock market for you

We created autonomous AI Agents that monitor the stock market for you while you go about your day.

How it works: Tell our AI Assistant what you want to monitor, and it creates a project for our team of autonomous AI Agents. You'll get notifications (email + app) when significant events matching your criteria are detected. For short-term projects, you'll be notified when your analysis is ready.

Behind the scenes: When you give the AI Assistant a request to monitor an entity (like a stock or group of stocks), an AI Project Manager plans the project and breaks it down into manageable tasks. These tasks run asynchronously; some are recurring (hourly/daily/weekly/monthly/quarterly/yearly), others one-time.

Example prompts you can try:

Long-term monitoring:
- "Monitor Apple stock and notify me of any important events and red flags"
- "Monitor Apple, Google, Microsoft, and Meta stock. Notify me if any of them start trending toward being undervalued"

Short-term analysis:
- "Create a project to analyze the last 30 earnings calls for Tesla, spot trends, and how the business has evolved over time"

You can track the progress of all tasks as the AI Agents work in the background.

Try it here: https://decodeinvesting.com/chat

This is still an early version; we're actively improving it based on feedback. Would love to hear what you think and what features you'd want to see next!

Previously shared our AI-powered Stock Market Research Analyst: https://news.ycombinator.com/item?id=41156478
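
The recurring-versus-one-time task split is easy to picture with a toy asyncio sketch. This is only an illustration of the scheduling idea, not DecodeInvesting's implementation; the task names and intervals are invented:

```python
import asyncio

async def one_time(name: str):
    # Runs once, e.g. "analyze the last 30 earnings calls".
    print(f"[one-time] {name}: done")

async def recurring(name: str, interval_s: float, repeats: int = 3):
    # Re-runs on a schedule; a real system would loop until cancelled
    # and use hourly/daily intervals instead of seconds.
    for _ in range(repeats):
        print(f"[recurring] {name}: checked")
        await asyncio.sleep(interval_s)

async def run_project():
    # The "project" is just a set of tasks running concurrently.
    await asyncio.gather(
        one_time("analyze recent earnings calls"),
        recurring("scan AAPL news for red flags", interval_s=1.0),
        recurring("check valuation metrics", interval_s=2.0),
    )

asyncio.run(run_project())
```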

Show HN: I built an embeddable Unicode library with MISRA C conformance

Hello, everyone. I built Unicorn: an embeddable, MISRA C:2012-conformant implementation of essential Unicode algorithms.

Unicorn is designed to be fully customizable: you can select which Unicode algorithms and character properties are included in or excluded from compilation. You can also exclude Unicode character blocks wholesale for scripts your application does not support. It's perfect for resource-constrained devices like microcontrollers and IoT devices.

About me: I quit my Big Corp job a few years back to pursue my passion for software development, and this is one of my first commercial releases.

Show HN: SmartHome – An Adventure Game

SmartHome is a free, browser-based game written in vanilla JavaScript with no libraries. I don't want to spoil anything about the gameplay, but if you like text adventures, point-and-click adventure games, puzzle games, escape room games, art games, incremental games, cozy games, and/or RPGs, then this might be your speed.

If you find it too hard and don't mind some mild spoilers, check out the hints page: https://smarthome.steviep.xyz/help

Enjoy!

Show HN: Performant intracontinental public transport routing in Rust

I made a public transport route planning program that's capable of planning journeys across Europe or North America! There's only one other FOSS project I know of (MOTIS/Transitous) that can do transit routing at this scale, and in the testing I've performed mine is about 50x faster. I've spent a few weeks on this project now and it's getting to the point where I can show it off, but the API responses need a lot of work before they're usable for any downstream application.

Example query (Berlin to Barcelona): https://farebox.airmail.rs/plan/52.5176122,13.4180261/41.380458,2.1455451

There are some bugs still. Notably, it's not capable of planning the return trip for this route, nor the reverse of the trip from Seattle to NYC that I gave in the blog post.

Blog post: https://blog.ellenhp.me/performant-intracontinental-transit-routing-in-rust

Repo: https://github.com/ellenhp/farebox

Side note: in the past some have criticized my writing style, and it's been a bit hurtful at times, but if you have *constructive* feedback on the blog post I'd appreciate it. I'm trying to get better at writing. :)
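
The example query above can also be issued from a script. The URL is taken directly from the post; since the response format is still in flux (as noted), this sketch just prints the raw body:

```python
import requests

# Origin: Berlin (52.5176122, 13.4180261); destination: Barcelona (41.380458, 2.1455451)
url = (
    "https://farebox.airmail.rs/plan/"
    "52.5176122,13.4180261/"
    "41.380458,2.1455451"
)

resp = requests.get(url, timeout=60)
resp.raise_for_status()
print(resp.text[:2000])  # preview the first 2 KB of the response
```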

Show HN: A simple web game to help learn chords and basic progressions

Hi Hacker News,

I've created Chord Nebula, a simple web-based game designed to help users learn and practice piano chords, basic progressions, and harmony fundamentals. The game integrates with MIDI keyboards, allowing you to play chords in real time and receive immediate feedback based on the key you choose.

GitHub repository: https://github.com/yottanami/chord_nebula
Live demo: https://chords.yottanami.com

Requirements: To use Chord Nebula, you'll need a MIDI keyboard connected to your computer.

Current status: Chord Nebula is still a simple project. I'm committed to improving it based on user feedback and would greatly appreciate any support or contributions from the community.

Looking for feedback and collaborators: I'm eager to hear your thoughts on Chord Nebula! Whether it's suggestions for new features, improvements, or bug reports, your feedback is invaluable. Additionally, if you're interested in collaborating to enhance the game, feel free to reach out or contribute directly via GitHub.

Thanks for taking the time to check out Chord Nebula!
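
The core loop of chord-from-MIDI feedback is simple to sketch. Chord Nebula itself is browser JavaScript using Web MIDI, so the following Python version (using the mido library plus a backend such as python-rtmidi) is only an illustration of the idea of tracking held notes and checking them against a target chord:

```python
import mido

MAJOR_TRIAD = {0, 4, 7}  # pitch classes relative to the chord root

def is_major_triad(held_notes, root_pc):
    pitch_classes = {(n - root_pc) % 12 for n in held_notes}
    return pitch_classes == MAJOR_TRIAD

held = set()
with mido.open_input() as port:          # default MIDI input device
    for msg in port:
        if msg.type == "note_on" and msg.velocity > 0:
            held.add(msg.note)
        elif msg.type in ("note_off", "note_on"):  # note_on with velocity 0 = release
            held.discard(msg.note)
        if len(held) >= 3 and is_major_triad(held, root_pc=0):  # root 0 = C
            print("C major, nice!")
```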
