The best Show HN stories from Hacker News from the past day

Latest posts:

Show HN: Image background removal without annoying subscriptions

Hi HN,<p>Removing the background from images is a surprisingly common image processing task, and AI has made it really easy. The technology has come a long way since segment leader remove.bg launched here on HN in Dec 2018 [1]. Chasing remove.bg's success, a legion of providers have come on the market offering varying levels of quality & service.<p>Despite the large number of competing services, most still price for very high (~95%?) gross margins. Furthermore, subscriptions make the effective unit price a lot higher than the list price for infrequent users, and require effort & attention to ensure you're getting value for money. This has priced out a host of use cases (e.g. the infrequent professional / hobbyist) and business models (e.g. ad-supported websites & mobile apps).<p>We see this as an opportunity to jump to the market's logical conclusion to gain market share and build goodwill: cost-plus PAYGO pricing, i.e. the "S3 pricing model".<p>So we've built yet another image background removal service ( <a href="https://pixian.ai" rel="nofollow">https://pixian.ai</a> - introductory post 6 months ago [2], a ton has been improved since then) but with a few twists:<p>1. Quantified quality comparison (90-120% of remove.bg, depending on image category), free for you to run on your own images so you can make an informed choice.<p>2. Customer-friendly pricing (PAYGO @ 1-10% of competitors' subscriptions) with a generous free tier (and free while in beta).<p>3. A novel API result format: Delta PNG [3], which offers excellent latency & bandwidth savings, especially useful for mobile apps.<p>4. Operational transparency: actual volume & latency metrics are public, with more coming soon (all API providers should be showing this).<p>There's of course more to it than just price, and we see several sources of differentiation in this market: quality, price, capability, reliability, latency, and goodwill.<p>As a new entrant we're looking to meet or beat the quality bar; beat on price, capability, reliability and latency; and to build up goodwill over time.<p>Our goal is to make it a no-brainer for new accounts to choose us, and to provide the tools and guidance necessary for existing accounts to make the switch with confidence.<p>We'd love for you to try it out and to hear your thoughts!<p><a href="https://pixian.ai" rel="nofollow">https://pixian.ai</a><p>[1] <a href="https://news.ycombinator.com/item?id=18697601" rel="nofollow">https://news.ycombinator.com/item?id=18697601</a> [2] <a href="https://news.ycombinator.com/item?id=33439405" rel="nofollow">https://news.ycombinator.com/item?id=33439405</a> [3] <a href="https://pixian.ai/api/deltaPng" rel="nofollow">https://pixian.ai/api/deltaPng</a>
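The subscription-vs-PAYGO math behind the post's pricing argument can be sketched with a quick back-of-the-envelope calculation. The prices below are made-up placeholders for illustration, not Pixian's or any competitor's actual rates:

```python
def effective_unit_price(monthly_fee, images_used):
    """Per-image cost of a subscription plan: the monthly fee is paid
    whether or not the plan's quota is actually used."""
    if images_used <= 0:
        raise ValueError("need at least one processed image")
    return monthly_fee / images_used

def paygo_cost(unit_price, images_used):
    """Under pay-as-you-go, the per-image cost is just the unit price,
    no matter how rarely you use the service."""
    return unit_price

# Hypothetical prices, for illustration only.
SUB_FEE = 9.99        # $/month subscription
PAYGO_RATE = 0.01     # $/image pay-as-you-go

heavy_user = effective_unit_price(SUB_FEE, 500)   # heavy user: ~ $0.02/image
casual_user = effective_unit_price(SUB_FEE, 5)    # casual user: ~ $2.00/image
print(f"heavy:  ${heavy_user:.3f}/image")
print(f"casual: ${casual_user:.3f}/image")
print(f"paygo:  ${paygo_cost(PAYGO_RATE, 5):.3f}/image")
```

The casual user's effective unit price ends up two orders of magnitude above the PAYGO rate, which is the "effective unit price a lot higher than the list price" effect described above.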

Show HN: Accelerated Docker builds on your local machine with Depot (YC W23)

Hello HN! We just launched a new feature we built at Depot that accelerates Docker image builds on your local machine in a team environment, and we wanted to share some of the details with you all.<p>The launch blog post: https://depot.dev/blog/local-builds<p>Depot is a hosted container build service - we run fully managed Intel and Arm remote build machines in AWS, with large instance sizes and SSD cache disks. The machines run BuildKit, the build engine that powers Docker, so generally anything you can `docker build`, you can also `depot build`.<p>Most people use Depot in CI, but you can also run `depot build` from your local machine. That performs the build on the remote builder, with its fast hardware and extra-fast datacenter network speeds.<p>But then, to download the container back to your local machine, BuildKit would transfer the <i>entire</i> container back for every build, including base image layers, since BuildKit wasn’t aware of what layers already existed on your device.<p>The new release fixes this! To make it work, we replaced BuildKit’s `--load` step by making the Depot CLI itself serve the Docker registry API on a local port, then asking Docker to pull the image from that localhost registry. The CLI in turn intercepts the requests for layers and fetches them directly using BuildKit’s content API.<p>This means Docker only asks for the layers it needs! This speeds up both local builds, where you only need to download changed layers, and CI, where it can skip building an expensive tarball of the whole image every time.<p>We ran into one major obstacle when first testing: the machine running the Docker daemon might not be the same machine running the `depot build` command. Notably, CircleCI has a remote Docker daemon, where asking it to pull from localhost does not reach the CLI’s temporary registry.<p>For this, we built a "helper" container that the CLI launches to run the HTTP server portion of the temporary registry - since it’s launched as a container, it runs on the same machine as the Docker daemon, and localhost is reachable. The Depot CLI then communicates with the helper container over stdio, receiving requests for layers and sending their contents back using a simple custom transport protocol.<p>This makes everything very efficient! One cool part about the remote build machines: you can share cache with anyone on your team who has access to the same project. So if your teammate already built all or part of the container, your build just reuses the result: in addition to using the fast remote builders instead of your local device, you can get cache hits on code you haven’t personally built yet.<p>We’d love for you to check it out, and are happy to answer any questions you have about technical details!<p>https://depot.dev/docs/guides/local-development
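The localhost-registry trick can be sketched in a few dozen lines. This is not Depot's actual implementation (which fetches layers via BuildKit's content API); it's a minimal stand-in that speaks just enough of the Docker Registry HTTP API v2 (a version-check endpoint, a manifest endpoint, and content-addressed blob endpoints) for a client to ping it and pull a layer by digest, with the blob store held in memory:

```python
import hashlib
import http.server
import json
import threading
import urllib.request

# In-memory stand-ins for the layers BuildKit would serve on demand.
BLOBS = {}

def put_blob(data: bytes) -> str:
    """Store a blob under its content address, as a registry would."""
    digest = "sha256:" + hashlib.sha256(data).hexdigest()
    BLOBS[digest] = data
    return digest

class RegistryHandler(http.server.BaseHTTPRequestHandler):
    """Serves just enough of the Docker Registry HTTP API v2 for a pull:
    GET /v2/ (version check), /v2/<name>/manifests/<ref>, and
    /v2/<name>/blobs/<digest>."""

    def do_GET(self):
        if self.path == "/v2/":
            self._send(200, b"{}", "application/json")
        elif "/blobs/" in self.path:
            digest = self.path.rsplit("/", 1)[-1]
            blob = BLOBS.get(digest)
            if blob is None:
                self._send(404, b"not found", "text/plain")
            else:
                # A real implementation would intercept this request and
                # fetch the layer from the remote builder on demand.
                self._send(200, blob, "application/octet-stream")
        elif "/manifests/" in self.path:
            manifest = json.dumps({"schemaVersion": 2,
                                   "layers": sorted(BLOBS)}).encode()
            self._send(200, manifest,
                       "application/vnd.docker.distribution.manifest.v2+json")
        else:
            self._send(404, b"not found", "text/plain")

    def _send(self, code, body, ctype):
        self.send_response(code)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

digest = put_blob(b"layer-bytes-here")
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), RegistryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Docker would hit these same endpoints during `docker pull localhost:<port>/img`.
ping = urllib.request.urlopen(f"http://127.0.0.1:{port}/v2/").status
layer = urllib.request.urlopen(
    f"http://127.0.0.1:{port}/v2/img/blobs/{digest}").read()
print(ping, layer == b"layer-bytes-here")
server.shutdown()
```

Because the client requests each blob individually by digest, it can simply skip requests for layers it already has, which is exactly why this beats shipping the whole image as one tarball.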

Show HN: File-by-file AI-generated comments for your codebase

My friends and I were complaining one day about having to decipher incomprehensible code, and decided to pass the code through GPT to see if it could write easily understandable comments to help us out. It turns out GPT can, but it was still a hassle to generate comments for large files.<p>So we decided to develop a basic web application that automatically integrates with your GitHub repository, generates comments, creates a pull request, and sends you an email when it's all done.<p>There is definitely a lot more that can be done, but we wanted feedback on whether this is a problem you face too. Do you often find it challenging to understand complex code? Do you have difficulties writing informative comments? And if so, would you find value in a tool that can automatically generate comments for your code?<p>Really appreciate any feedback and suggestions! Thanks in advance!
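The file-by-file flow described above can be sketched as a small pipeline. The `fake_llm_comment` function here is a hypothetical stand-in for the real GPT call, not an actual API; the point is the walk-the-repo, comment-each-file shape:

```python
import tempfile
from pathlib import Path

def fake_llm_comment(source: str) -> str:
    """Hypothetical stand-in for a real LLM call. It just prepends a
    file-level comment derived from the first line of the file."""
    first_line = source.splitlines()[0] if source.strip() else "(empty)"
    return f"# Auto-generated summary: file starts with {first_line!r}\n" + source

def comment_repo(root: Path, exts=(".py",)) -> dict:
    """File-by-file pass over a checkout: read each source file, ask the
    model for a commented version, and collect results keyed by relative
    path, ready to be turned into a pull request."""
    results = {}
    for path in sorted(root.rglob("*")):
        if path.is_file() and path.suffix in exts:
            results[str(path.relative_to(root))] = fake_llm_comment(
                path.read_text())
    return results

# Demo on a throwaway one-file "repo".
tmp = Path(tempfile.mkdtemp())
(tmp / "app.py").write_text("def add(a, b):\n    return a + b\n")
commented = comment_repo(tmp)
print(list(commented))
print(commented["app.py"].splitlines()[0])
```

A real version would also chunk files that exceed the model's context window, which is presumably where the "hassle with large files" mentioned above comes in.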

Show HN: DB-GPT, an LLM tool for database

Show HN: Blotter – An interactive, never-ending music video

One day I was listening to a playlist and wished there could be some cool visuals to go along with it.<p>Blotter is a proof of concept I hacked together that does a bit of audio recognition combined with a few generative AI models (both text and image) to create visuals that are relevant to the song.<p>The video stream is generated in real time at 24fps - you can try it yourself by requesting visuals in the Twitch chat using the "!v" command!<p>Right now it's mostly a fun hack project, but I am tinkering with new model architectures for higher-fidelity video, as well as an interactive tool so people can make videos with their own audio files.<p>I'd love to hear any feedback or suggestions, thanks!

Show HN: Open sourcing Harmonic, my Android Hacker News client

Show HN: A simple echo server for testing HTTP clients

I've developed an application called "echoserver" that I'd like to share with Hacker News. Its purpose is to simplify the testing of HTTP clients: it functions as an echo server, responding to requests by echoing back the received data, so users can simulate various server responses and test their HTTP clients accordingly.<p>With "echoserver," users can generate custom responses by specifying the desired status code, headers, and response body. This flexibility enables thorough testing and simplifies verifying client behavior under various scenarios, whether that's error handling, specific headers, or performance under different response sizes.<p>Overall, "echoserver" aims to streamline the testing process for developers and help them verify the functionality and robustness of their HTTP clients. I invite the Hacker News community to explore the app and provide feedback.
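The core idea (echo the request back, and let the client dictate the response it wants to see) can be sketched in a few lines of stdlib Python. This is not the actual echoserver's interface; the `X-Echo-Status` header here is an illustrative assumption for how a client might request an arbitrary status code:

```python
import http.server
import json
import threading
import urllib.request

class EchoHandler(http.server.BaseHTTPRequestHandler):
    """Echoes the request back as JSON, and honors a hypothetical
    X-Echo-Status request header so clients under test can provoke
    arbitrary status codes."""

    def do_GET(self):
        status = int(self.headers.get("X-Echo-Status", "200"))
        body = json.dumps({
            "method": self.command,
            "path": self.path,
            "headers": dict(self.headers),
        }).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# A client under test sees its own request mirrored back.
resp = urllib.request.urlopen(f"http://127.0.0.1:{port}/hello?x=1")
echoed = json.loads(resp.read())
print(resp.status, echoed["path"])
server.shutdown()
```

The same pattern extends naturally to echoing request bodies and custom headers, which is what makes an echo server useful for exercising a client's error handling and parsing.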

Show HN: Ki Programming Language

Alpha preview of the ki programming language. Currently Linux x64 and macOS x64 only; Windows users can use WSL for now. Feedback is much appreciated.

Show HN: I built a web app for learning Vim from the browser as a 17-year-old

Hey HN!<p>After my own experiences with learning Vim, I wanted to skip the frustrating process of configuring a new tool before even learning how to use it. In an attempt to solve this problem, I started working on Vim Ninja, a web app that lets developers learn Vim through interactive lessons in the browser. It’s been a couple of months, and I’m proud to say that I’ve finally released <a href="https://VimNinja.com" rel="nofollow">https://VimNinja.com</a>!<p>Check out a demo of the app here: <a href="https://youtu.be/reukQHKqMZE" rel="nofollow">https://youtu.be/reukQHKqMZE</a>.<p>On the technical side of things, I built the entire app with SvelteKit and Tailwind, which turned out to be an amazing decision. I really like SvelteKit’s filesystem-based router and Svelte’s brevity, and Tailwind makes styling a fun task for me. I’m using CodeMirror 6 as the base for Vim Ninja’s code editor; I prefer it over more feature-packed alternatives like the Monaco Editor, which I started out with but soon abandoned due to its worse performance and the sheer number of bells and whistles I just didn’t need.

Show HN: Psychic - An open-source integration platform for unstructured data

My cofounder and I used to work at Robinhood, where we shipped the company’s first OAuth integrations, so we know a lot about how data moves between companies.<p>For example, we know that the pain of building new API integrations scales with the level of fragmentation and the number of competing "standards". In the current meta, we see this pain with a lot of AI startups who invariably need to connect to their customers’ data, but have to support 50+ integrations before they even scale to 50+ customers.<p>This is the process for an AI startup to add a new integration for a customer:<p>- Pore over the API docs for each source application and write a connector for each<p>- Play email tag to find the right stakeholders and get them to share sensitive API keys, or give them an OAuth app. It can take 6+ weeks for some platforms to review new OAuth apps<p>- Normalize data that arrives in different formats from each source (HTML, XML, text dumps, 3 different flavors of markdown, JSON, etc.)<p>- Figure out what data should be vectorized, what should be stored as SQL, and what should be discarded<p>- Detect when data has been updated and synchronize it<p>- Monitor when pipelines break so data doesn’t go stale<p>This is a LOT of work for something that doesn’t move the needle on product quality.<p>That’s why we built Psychic.dev to be the fastest and most secure way for startups to connect to their customers’ data. You integrate once with our universal APIs and get N integrations with CRMs, knowledge bases, ticketing systems and more, with no incremental engineering effort.<p>We abstract away the quirks of each data source into Document and Conversation data models, and try to find a good balance that allows for deep integrations while maintaining broad utility. Since it’s open source, we encourage founders to fork and extend our data models to fit their needs as they evolve, even if it means migrating off our paid version.<p>To see an example in action, check out our demo repo here: <a href="https://github.com/psychic-api/psychic-langchain-tutorial/">https://github.com/psychic-api/psychic-langchain-tutorial/</a><p>We’re open to contributions! Learn more at docs.psychic.dev or by emailing us at founders@psychic.dev
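The normalize-into-a-common-model step can be sketched like this. The `Document` fields and the per-source connector functions below are illustrative assumptions, not Psychic's actual schema; the point is that each source-specific format collapses into one shape downstream code can vectorize or store:

```python
from dataclasses import dataclass, field
from html.parser import HTMLParser

@dataclass
class Document:
    """A common shape that per-source connectors normalize into
    (illustrative only; the real models live in the Psychic repo)."""
    source: str
    title: str
    content: str          # plain text, regardless of source format
    metadata: dict = field(default_factory=dict)

class _TextExtractor(HTMLParser):
    """Collects only the text nodes from an HTML document."""
    def __init__(self):
        super().__init__()
        self.parts = []
    def handle_data(self, data):
        self.parts.append(data)

def from_html(source: str, title: str, html: str) -> Document:
    """Connector for an HTML-flavored source: strip tags down to text."""
    p = _TextExtractor()
    p.feed(html)
    text = " ".join(" ".join(p.parts).split())  # collapse whitespace
    return Document(source, title, text)

def from_markdown(source: str, title: str, md: str) -> Document:
    """Connector for a markdown source: a crude normalization that drops
    heading markers; a real one would use a proper parser."""
    text = " ".join(line.lstrip("# ").strip()
                    for line in md.splitlines() if line.strip())
    return Document(source, title, text)

docs = [
    from_html("notion", "FAQ", "<h1>FAQ</h1><p>How do refunds work?</p>"),
    from_markdown("github", "README", "# README\nInstall with pip."),
]
print([d.content for d in docs])
```

Once everything is a `Document`, the later steps in the list above (choosing what to vectorize, detecting updates, monitoring staleness) can be written once instead of once per integration.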
