The best Show HN stories from Hacker News from the past day


Latest posts:

Show HN: Halloy – Modern IRC client

I started working on Halloy back in 2022, with the goal of giving something back to the community I’ve been a part of for the past two decades. I wanted to create a modern, multi-platform IRC client written in Rust.

Three years later, I’ve made new friends who have become core contributors, and there are now over 200 people idling in our #halloy channel on Libera.

My hope is that this client will outlive me and that IRC will live on.

Show HN: Metorial (YC F25) – Vercel for MCP

Hey HN! We're Wen and Tobias, and we're building Metorial (https://metorial.com), an integration platform that connects AI agents to external tools and data using MCP.

The problem: while MCP works great locally (e.g., in Cursor or Claude Desktop), server-side deployments are painful. Running MCP servers means managing Docker configs, per-user OAuth flows, scaling concurrent sessions, and building observability from scratch. This infrastructure work turns simple integrations into weeks of setup.

Metorial handles all of this automatically. We maintain an open catalog of ~600 MCP servers (GitHub, Slack, Google Drive, Salesforce, databases, etc.) that you can deploy in three clicks. You can also bring your own MCP server or fork existing ones.

For OAuth, just provide your client ID and secret and we handle the entire flow, including token refresh. Each user then gets an isolated MCP server instance automatically configured with their own OAuth credentials.

What makes us different is that our serverless runtime hibernates idle MCP servers and resumes them with sub-second cold starts while preserving state and connections. Our custom MCP engine can manage thousands of concurrent connections, giving you a scalable service with per-user isolation. Alternatives either run shared servers (a security issue) or provision separate VMs per user (expensive and slow to scale).

Our Python and TypeScript SDKs let you connect LLMs to MCP tools in a single function call, abstracting away the protocol complexity. But if you want to dig deeper, you can use standard MCP and our REST API (https://metorial.com/api) to connect to our platform.

You can self-host (https://github.com/metorial/metorial) or use the managed version at https://metorial.com.

So far, we've seen enterprise teams use Metorial as a central integration hub for tools like Salesforce, while startups use it to cut weeks of infra work when building AI agents with integrations.

Demo video: https://www.youtube.com/watch?v=07StSRNmJZ8

Our repos: Metorial: https://github.com/metorial/metorial, MCP Containers: https://github.com/metorial/mcp-containers

SDKs: Node/TypeScript: https://github.com/metorial/metorial-node, Python: https://github.com/metorial/metorial-python

We'd love to hear feedback, especially if you've dealt with deploying MCP at scale!

Show HN: CSS Extras

Show HN: I extracted BASIC listings for Tim Hartnell's 1986 book

Tim Hartnell was one of the most prolific authors during the early days of the home-computing boom, writing many popular books covering game genres on different platforms and, in this case, artificial intelligence.

I've extracted the BASIC program listings from Hartnell's 1986 book "Exploring Artificial Intelligence on Your IBM PC" and organized them alongside a PC-BASIC runtime environment and instructions, so you can try these programs out yourself.

Even though the AI landscape has changed enormously since Hartnell wrote this book, I hope one or two of you will get some value out of these listings if you're interested in exploring the fundamentals of AI as they stood on home-computing platforms in the 1980s.

Tim Hartnell sadly passed away in 1991 at the young age of 40, and without his writing I imagine more than a few of us would not have found the start in computing we did. Thanks, Tim.

Show HN: Aidlab – Health Data for Devs

Hey HN! I'm Jakub, and together with my co-founders Agnieszka and Nathan, we built Aidlab, a wearable that gives developers gold-standard physiological data.

Unlike health trackers with locked-down APIs, Aidlab ships with a free SDK [1] across 6+ platforms, so you can just pip install aidlabsdk, flutter pub add aidlab_sdk, or the equivalent on your platform (even Unity), and start streaming raw health data and events in real time with simple didReceive*(timestamp, value) callbacks.

We currently expose 13 data types through the API, including raw ECG, cough/snoring, motion, raw respiration, skin temperature, bodyweight reps, and body position, plus 20 high-level stats like stress or readiness.

The most common questions I get are:

1) "How is it better than my smartwatch?"

2) "Why did you build it?"

Chest-mounted wearables are considered the gold standard for physiological measurements. For example, whenever Apple validates their watch, they benchmark against chest straps [2], because some signals can only be reliably measured (or measured at all!) near the heart, including continuous ECG, true respiration (based on lung volume changes), and body position/orientation.

As for the second question: the problem for us was that smartwatches were too simple and their data too inaccurate, while advanced medical devices were too pricey or too complicated. We found a sweet spot between accuracy and accessibility: Aidlab delivers medical-grade signals without the hospital-level complexity. Since "medical-grade" is a bold claim, we've published validation papers comparing Aidlab's performance against other certified medical devices [3].

Today Aidlab is already a fairly mature product. We shipped our first version in 2020 and landed our first clients, including Boeing/Jeppesen (monitoring pilots' bio-signals during tests and training).

Now we're about to release Aidlab 2 [4], with additional signals like EDA and GPS, and a bunch of new features, including on-device ML (we've trained a few small LSTM models running inference with TensorFlow Lite for Micro). The cool part is that we've built a custom shell on top of FreeRTOS, letting anyone invoke POSIX-like commands directly on the device, for example:

timeout 10 temperature --sampling-rate 1 | tee /data/temperature.csv | tail -n 5

The biggest breakthrough for us was realizing that cloud-based processing was the wrong approach. In the beginning, we pushed most of the computation to the cloud. It seemed natural, but it turned out to be slow and costly, and devs didn't want it ("hey, is there a way to use your product without the cloud?"). For example, our ECG analysis pipeline used to send raw data to an external microservice, processing it in 30-minute chunks through Bull queues. A 24-hour Holter analysis could spawn 100k+ event objects and take significant time to complete. Now we're doing everything we can to move computation to the edge. In an ideal world, the cloud wouldn't store or process anything; it would just receive already-analyzed, privacy-preserving results straight from the device.

Another lesson: don't hand-solder prototypes at 3 a.m. to save money. Pay professionals to assemble PCBs.

We decided to showcase this now for three reasons:

- health feels more relevant than ever with the rise of longevity research and biohacking,

- we are close to finalizing Aidlab 2,

- and I am super curious to see if anyone here finds it useful!

If you'd like to check Aidlab's quality for yourself, we publish free datasets every week, recorded during different activities [5].

[1] https://github.com/Aidlab
[2] https://www.apple.com/health/pdf/Heart_Rate_Calorimetry_Activity_on_Apple_Watch_November_2024.pdf
[3] https://aidlab.com/validation
[4] https://aidlab.com/aidlab-2
[5] https://aidlab.com/datasets
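The didReceive*(timestamp, value) callback pattern described above can be sketched in a few lines. To be clear, this is a hypothetical illustration of the pattern only, not the actual aidlabsdk API: the AidlabDelegate class, the did_receive_skin_temperature method name, and the sample values are all assumptions made up for the example.

```python
# Hypothetical sketch of a didReceive*-style delegate, modeled on the
# callback pattern described above -- NOT the actual aidlabsdk API.

class AidlabDelegate:
    """Collects samples the device pushes as (timestamp, value) pairs."""

    def __init__(self):
        self.skin_temperature = []

    def did_receive_skin_temperature(self, timestamp, value):
        # Each sample arrives as soon as the device measures it.
        self.skin_temperature.append((timestamp, value))

    def latest(self):
        # Most recent sample, or None before any data has streamed in.
        return self.skin_temperature[-1] if self.skin_temperature else None


# Simulate a short stream of samples, as a live connection would deliver.
delegate = AidlabDelegate()
for ts, temp in [(0.0, 33.1), (1.0, 33.2), (2.0, 33.2)]:
    delegate.did_receive_skin_temperature(ts, temp)

print(delegate.latest())  # (2.0, 33.2)
```

The appeal of this style is that the app never polls: it registers a delegate once and reacts to each sample as it arrives.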

Show HN: Baby's first international landline

Hi HN,<p>As a weekend project, I hacked together a physical phone, a Raspberry Pi running Asterisk and Twilio, to let toddlers safely make international calls.<p>I’ve documented the setup in this write-up and published the code + Ansible playbooks on GitHub so others can replicate it.<p>I built this so kids of expats can easily stay in touch with family on other continents.<p>Would love feedback from anyone who’s worked on something similar or tries building this themselves!<p>writeup: <a href="https://wip.tf/posts/telefonefix-building-babys-first-international-landline/" rel="nofollow">https://wip.tf/posts/telefonefix-building-babys-first-intern...</a> github repos: - <a href="https://github.com/nbr23/ansible-role-telefonefix" rel="nofollow">https://github.com/nbr23/ansible-role-telefonefix</a> - <a href="https://github.com/nbr23/allo-wed" rel="nofollow">https://github.com/nbr23/allo-wed</a>
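For a toddler-safe phone like this, the key safety property is an allowlist: the handset should only ever reach known family numbers. A minimal Asterisk dialplan fragment in that spirit might map single-digit speed-dial extensions to specific destinations and reject everything else. This is an illustrative sketch under stated assumptions, not the project's actual configuration: the trunk name twilio-trunk and the phone numbers are placeholders.

```ini
; extensions.conf -- illustrative sketch, not the project's actual dialplan.
; Each speed-dial digit maps to one allowlisted family number;
; any other dialed string falls through to a busy tone.
[toddler-phone]
exten => 1,1,Dial(PJSIP/+15551230001@twilio-trunk,30)
exten => 2,1,Dial(PJSIP/+33155512302@twilio-trunk,30)
; Catch-all pattern: anything else starting with a digit gets rejected.
exten => _X!,1,Busy(5)
```

Keeping the allowlist in the dialplan (rather than in the Twilio console) means the Pi enforces it locally, before any call ever reaches the trunk.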

Show HN: AI toy I worked on is in stores

Alt link: https://mrchristmas.com/products/santas-magical-telephone

Video demo: https://www.youtube.com/watch?v=0z7QJxZWFQg

The first time I talked with the AI Santa and it responded with a joke, I was HOOKED. The fun/nonsense doesn't click until you try it yourself. What's even more exciting is that you can build it yourself:

libpeer: https://github.com/sepfy/libpeer

pion: https://github.com/pion/webrtc

Then put all your fun logic in your Pion server. Connect to any voice AI provider, or roll your own with open source. Anything is possible.

If you have questions or hit any roadblocks, I would love to help. I have lots of hardware snippets on my GitHub: https://github.com/sean-der.

Show HN: SQLite Online – 11 years of solo development, 11K daily users
