The best Show HN stories from Hacker News from the past day

Latest posts:

Show HN: I built an Obsidian plugin to create notes from BibTeX

With this plugin, you can create literature notes from BibTeX entries, display formatted reference lists, and instantly generate citations.
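The plugin itself runs inside Obsidian, but the core transformation (BibTeX entry in, Markdown literature note out) can be sketched in a few lines of Python. This is an illustrative, naive parser, not the plugin's actual code:

```python
import re

def parse_bibtex_entry(entry: str) -> dict:
    """Parse a single BibTeX entry into a dict of fields (naive, regex-based)."""
    m = re.match(r"\s*@(\w+)\s*\{\s*([^,]+),", entry)
    fields = {"type": m.group(1), "key": m.group(2)}
    for fm in re.finditer(r"(\w+)\s*=\s*\{([^}]*)\}", entry):
        fields[fm.group(1).lower()] = fm.group(2)
    return fields

def literature_note(fields: dict) -> str:
    """Render a simple Markdown literature note from parsed fields."""
    return (
        f"# {fields.get('title', 'Untitled')}\n\n"
        f"- Authors: {fields.get('author', 'unknown')}\n"
        f"- Year: {fields.get('year', 'n.d.')}\n"
        f"- Citation key: @{fields['key']}\n"
    )

entry = """@article{doe2023,
  title = {An Example Paper},
  author = {Doe, Jane},
  year = {2023}
}"""
print(literature_note(parse_bibtex_entry(entry)))
```

A real implementation would use a proper BibTeX parser (nested braces, `@string` macros, quoted values all break the regex above), but the note-templating step looks much like this.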

Show HN: Open-source real time data framework for LLM applications

Hey HN, I am the founder of Tensorlake. Prototyping LLM applications has become a lot easier, but building decision-making LLM applications that work on constantly updating data is still very challenging in production settings. The systems engineering problems we have seen people face are:

1. Reliably processing ingested content in real time when the application is sensitive to freshness of information.
2. Being able to bring in any kind of model, and run different parts of the pipeline on GPUs and CPUs.
3. Fault tolerance to ingestion spikes and compute infrastructure failures.
4. Scaling compute, reads, and writes as data volume grows.

We built and open-sourced Indexify (https://github.com/tensorlakeai/indexify) to provide a compute engine and data framework for LLM applications that work in dynamic environments where data is updated frequently or new data is constantly created.

Developers describe a declarative extraction graph, with stages that extract or transform unstructured data. Data passes from one stage to another and finally ends up at sinks like vector databases, blob stores, or structured data stores like Postgres.

Examples:

1. A graph for video understanding could be: Ingestion -> Audio Extraction -> Transcription -> NER and Embedding, with another path: Ingestion -> Key Frame Extraction -> Object and Scene Description (https://github.com/tensorlakeai/indexify/blob/main/docs/docs/examples/Video_RAG.ipynb)
2. Structured extraction and search on PDFs: PDF -> Markdown -> Chunking -> Embedding, NER (https://github.com/tensorlakeai/indexify/blob/main/docs/docs/examples/SEC_10_K_docs.ipynb)

Application layer: Indexify works as a retriever in the LLM application stack, so you can use it pretty easily with your existing applications.
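The example graphs above are easy to picture as plain adjacency lists. The sketch below is a generic illustration of the declarative-graph idea, not Indexify's actual API; the stage names follow the video example:

```python
# Hypothetical extraction graph (illustrative, not Indexify's real API):
# each stage lists its downstream stages, mirroring the video example above.
video_graph = {
    "ingestion": ["audio_extraction", "key_frame_extraction"],
    "audio_extraction": ["transcription"],
    "transcription": ["ner_and_embedding"],
    "key_frame_extraction": ["object_and_scene_description"],
    "ner_and_embedding": [],
    "object_and_scene_description": [],
}

def traversal_order(graph: dict) -> list:
    """Topologically order stages so each runs after all of its upstream stages."""
    indegree = {stage: 0 for stage in graph}
    for downstream in graph.values():
        for stage in downstream:
            indegree[stage] += 1
    ready = [stage for stage, d in indegree.items() if d == 0]
    order = []
    while ready:
        stage = ready.pop(0)
        order.append(stage)
        for nxt in graph[stage]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    return order

print(traversal_order(video_graph))
```

A scheduler like the one described later in the post does essentially this ordering, except continuously and fault-tolerantly, turning each stage into tasks as new data arrives.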
Call the retriever API over HTTP to get extracted data from Indexify, and that's pretty much all the integration you need to search or retrieve data.

You can use composable extractors and chain them together to build complex real-time data pipelines that work with any unstructured data.

Since this is HN, I have the liberty to talk about some technical details :)

How is it real time? We built a replicated state machine with Raft to process tens of thousands of ingestion events every second. The storage and network layer is optimized so the scheduler can create tasks in under 2 milliseconds. The architecture of the scheduler is very similar to that of Google's Borg and HashiCorp's Nomad, and it can be extended to parallel scheduling on multiple machines with a centralized sequencer, like Nomad.

Storage systems: Since the focus is unstructured data, we wanted to support storing and extracting from large files and to scale horizontally as data volume grows. Indexify uses blob stores under the hood to store unstructured data. If a graph creates embeddings, they are automatically stored in vector stores, and structured data is stored in structured stores like Postgres. Under the hood we have Rust traits between the ingestion server and the data stores, so we can easily implement support for other vector stores.

Syncing vector and structured stores: Indexify also syncs structured data with the vector store if it detects the presence of both in a graph. This allows pre-filtering to narrow down the search space for better results.

APIs: Indexify exposes semantic search APIs over the vector store, and read-only SQL queries over semi-structured data. We can automatically figure out the schema of the structured data and expose a SQL interface on top. Behind the scenes we parse the SQL and have a layer which scans and reads the databases to slice and dice the rows, so BI tools should work out of the box on extracted data.
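For illustration, the HTTP retriever integration mentioned above might look roughly like this. The endpoint path, port, and payload shape are assumptions for the sketch, not Indexify's documented API:

```python
import json
import urllib.request

# Hypothetical retriever call: fetch extracted chunks for a query.
# The URL and payload fields here are illustrative assumptions.
payload = json.dumps({"query": "quarterly revenue", "top_k": 5}).encode()
req = urllib.request.Request(
    "http://localhost:8900/retrieve",  # assumed local server address
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would send the request; the JSON response
# would carry the extracted text and metadata to feed into an LLM prompt.
```

The point of the design is that this one call is the whole integration surface: the pipelines, GPUs, and stores stay behind the retriever endpoint.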
We have Python and TypeScript libraries to make it easy to build new applications or integrate into existing ones.

Thoughts? Would love to hear if you think this would be useful for what you are building!

Show HN: Spot – Simple, cross-platform, reactive desktop GUI toolkit for Go

Hi HN, I’m excited to share Spot, a simple, cross-platform, React-like GUI library for Go. It is just a few days old and has lots of missing features, but I'm happy with the results so far and am looking for some design feedback.

Spot is designed to be easy to use and to provide a consistent API across different platforms (mainly Mac & Linux). It’s inspired by React, but written in Go, aiming to combine the best of both worlds: the easy tooling & performance of Go with a modern, reactive approach to UI development.

Key features:

- Cross-platform: Leveraging FLTK [1] & Cocoa [2], Spot works on Mac, Linux, and the BSDs, with plans for native Windows support in the future.
- Reactive UI: Adopts a React-like model for building UIs, making it intuitive for those familiar with reactive frameworks.
- Traditional, native widget set: Utilizes native widgets where available to provide a more traditional look and feel.

Why I built it:

I was searching for a cross-platform GUI toolkit for Go that had a more traditional appearance, and none of the existing options quite met my needs. I then started playing with Gocoa and go-fltk, which turned into an experiment to see how challenging it would be to build something like React in Go, and it kinda evolved into Spot. ¯\_(ツ)_/¯

In 2024, is there still a place for classic desktop GUIs—even with a modern spin?

I’d love to hear your thoughts, feedback, and any suggestions for improvement. Also, contributions are very welcome.

Thank you for checking it out!

[1] https://github.com/pwiecz/go-fltk
[2] https://github.com/roblillack/gocoa

Show HN: We open sourced our entire text-to-SQL product

Long story short: we (Dataherald) just open-sourced our entire codebase, including the core engine, the clients that interact with it, and the backend application layer for authentication and RBAC. You can now use the full solution to build text-to-SQL into your product.

The problem: modern LLMs write syntactically correct SQL, but they struggle with real-world relational data. This is because real-world data and schemas are messy, natural language can often be ambiguous, and LLMs are not trained on your specific dataset.

The solution: the core NL-to-SQL engine in Dataherald is an LLM-based agent which uses chain-of-thought (CoT) reasoning and a number of different tools to generate high-accuracy SQL from a given user prompt. The engine achieves this by:

- Collecting context at configuration time from the database and from sources such as data dictionaries and unstructured documents, which are stored in a data store or a vector DB and injected if relevant
- Allowing users to upload sample NL <> SQL pairs ("golden SQL") which can be used in few-shot prompting or to fine-tune an NL-to-SQL LLM for that specific dataset
- Executing the SQL against the DB to get a few sample rows and recover from errors
- Using an evaluator to assign a confidence score to the generated SQL

The repo includes four services (https://github.com/Dataherald/dataherald/tree/main/services):

1. Engine: The core service, which includes the LLM agent, vector stores, and DB connectors.
2. Admin Console: A Next.js front-end for configuring the engine and for observability.
3. Enterprise Backend: Wraps the core engine, adding authentication, caching, and APIs for the front-end.
4. Slackbot: Integrates Dataherald directly into your Slack workflow for on-the-fly data exploration.

Would love to hear from the community on building natural language interfaces to relational data. Anyone live in production without a human in the loop? Thoughts on how to improve performance without spending weeks on model training?
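To make the few-shot step above concrete, here is a minimal sketch of assembling schema context and golden SQL pairs into a single prompt. The function name and prompt layout are illustrative assumptions; Dataherald's real engine adds context retrieval, execution against the DB, and the evaluator on top of this:

```python
# Illustrative few-shot prompt assembly for NL-to-SQL (not Dataherald's
# actual prompt format): schema + golden pairs + the user's question.

def build_prompt(schema: str, golden_pairs: list, question: str) -> str:
    """Assemble an NL-to-SQL prompt from schema DDL and golden NL<>SQL pairs."""
    examples = "\n\n".join(
        f"Question: {nl}\nSQL: {sql}" for nl, sql in golden_pairs
    )
    return (
        "You write SQL for the schema below. Answer with SQL only.\n\n"
        f"Schema:\n{schema}\n\n"
        f"Examples:\n{examples}\n\n"
        f"Question: {question}\nSQL:"
    )

schema = "CREATE TABLE orders (id INT, customer TEXT, total NUMERIC, placed_at DATE);"
golden = [("How many orders are there?", "SELECT COUNT(*) FROM orders;")]
prompt = build_prompt(schema, golden, "What is the total revenue this year?")
print(prompt)
```

The golden pairs matter because they pin down table naming, join conventions, and dialect quirks that the model cannot infer from the schema alone; the post's fine-tuning option bakes the same signal into the weights instead of the prompt.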

Show HN: Excel to Python Compiler

We (me and @aarondia) built a tool to help you turn pseudo-software Excel files into real-software Python. Ideally, Pyoneer helps you automate your manual Excel processes. You can try it today here: https://pyoneer.ai.

How it works:

1. You upload an Excel file.
2. We statically parse the Excel file and build a dependency graph of all the cells, tables, formulas, and pivots.
3. We do a graph traversal and translate nodes as we hit them. We use OpenAI APIs to translate formulas. There's a bunch of extra work here, because even with the best prompt engineering a fella like me can do, OpenAI sucks at translating formulas (primarily because it doesn't know what data types it's dealing with). We augment this translation with a mapping from ranges to variable names and types, which in our experience can improve the percentage of correctly translatable formulas by about 5x.
4. We generate test cases for our translations as well, to make sure the Python process matches your Excel process.
5. We give you back a Jupyter notebook that contains the code we generated.

If there are pieces of the Excel file we can't translate successfully (complex formulas, or pivot tables currently), then we leave them as a TODO in the code. This makes it easy for you to hop in and finish the script.

Who is this for:

Developers who know Python, primarily! Pyoneer might be useful if:

1. You've got an Excel file you're looking to move to Python (usually for speed, size, or maintenance reasons).
2. There's enough logic contained in the workbook that it's going to be a hassle for you to just rewrite it from scratch.
3. Or you don't know the logic in the Excel workbook well, since you didn't write it in the first place :)

Post-translation, even if Pyoneer doesn't nail it perfectly or translate all the formulas, you'll be able to pop into the notebook and continue cleaning up the TODOs / finish writing the formulas.

What the alpha launch supports:

We launched early! Currently we're focused on supporting:

1. Any number of sheets, with any reference structure between them.
2. Cells that translate as variables directly. We'll translate the formulas to Python code that has the same result, or else we'll generate a TODO letting you know we failed to translate that cell.
3. Tables that translate as pandas DataFrames. We support at most one table per sheet, and the tables must be contiguous. If the formulas in a column are consistent, then we will try to translate them as a single pandas statement.

We do not support pivot tables or complex formulas. When we fail to translate these, we generate TODO statements. We also don't support graphs or macros, and you won't see these reflected in the output at all currently.

Why we built this:

We did YC S20 and built an open-source tool called Mito (https://trymito.io). It's been a good journey since then: we've scaled revenue and grown to over 2k GitHub stars (https://github.com/mito-ds/mito). But fundamentally, Mito is a tool for Excel users who want to start writing Python code more effectively.

We wanted to take another stab at the Excel -> Python pain point in a way that was more developer-focused, helping developers who have to translate Excel files into Python do it much more quickly. Hence, Pyoneer!

I'll be in the comments today if you've got feedback, criticism, questions, or comments.
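The dependency-graph and traversal steps described above can be sketched with nothing but a regex and a topological sort. This is a toy illustration, not Pyoneer's actual parser (which also has to handle ranges, tables, pivots, and cross-sheet references):

```python
import re
from graphlib import TopologicalSorter

# Which cells does a formula reference? A1-style refs only, for the sketch.
CELL_REF = re.compile(r"\b([A-Z]{1,3}[0-9]+)\b")

def dependencies(formula: str) -> set:
    """Cells referenced by a formula string like '=A1*10' or '=SUM(A1:A3)+B2'."""
    return set(CELL_REF.findall(formula))

formulas = {
    "A1": "=2",
    "A2": "=A1*10",
    "B1": "=A1+A2",
}
# Map each cell to its predecessors, then topologically sort so every
# cell is translated (or evaluated) after the cells it depends on.
graph = {cell: dependencies(f) for cell, f in formulas.items()}
order = list(TopologicalSorter(graph).static_order())
print(order)  # inputs come before the cells that depend on them
```

Translating nodes in this order is what lets each formula's translation see the variable names and types of its inputs, which is the mapping the post credits with the roughly 5x improvement.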

Show HN: Porter Cloud – PaaS with an eject button

Hi HN! Porter Cloud (https://porter.run/porter-cloud) is a Platform as a Service (PaaS) like Heroku, but we make it easy for you to migrate to AWS, Azure, or GCP when you're ready.

Like Heroku, Porter takes care of a lot of generic DevOps work for you (like setting up CI/CD, containerizing your applications, autoscaling, SSL certificates, and setting up a reverse proxy) and lets you deploy your apps with a few clicks, saving you a lot of time while developing. However, as you probably know, there's a downside: platforms like this become constraining if and when your app takes off and you need to scale. The time you saved while developing can get pretty expensive once you're paying for a lot of users — and the platforms tend to try to keep you locked in!

Our idea is to give you the best of both worlds: use Porter Cloud for as long as it saves you time and development cost, but at any time you can press the "eject button" to migrate your app to your own AWS, Azure, or GCP account as you please. We make it seamless to break out, so you're no longer subject to the rigid constraints of a conventional PaaS. You can migrate in a few simple steps, outlined here: https://docs.porter.run/other/eject.

A bit of background: we first launched on HN almost 3 years ago with our original product (https://news.ycombinator.com/item?id=26993421, https://porter.run), which deploys your applications to your own AWS, Azure, or GCP account with the simple experience of a PaaS.

Since then, we've helped countless companies migrate from a PaaS to one of the big three cloud providers. Most of them had started on a PaaS in the early days to optimize for speed and ease of use, but ultimately had to go through a painful migration to AWS, Azure, or GCP as they scaled and ran into various constraints on their original PaaS.

Interestingly, we learned that many companies that start on a PaaS are fully aware that they'll have to migrate to one of the big three public clouds [1] at some point. Yet they choose to deploy on a PaaS anyway, because outgrowing a cloud platform is a "champagne problem" when you're focused on getting something off the ground. This, however, becomes a very tangible problem when you need to migrate your entire production infrastructure while serving many users at scale. It's a "nice problem to have", until it isn't.

We built Porter Cloud so that the next generation of startups can get off the ground as quickly as possible, with the peace of mind that they can effortlessly move to one of the tried-and-true hyperscalers when they are ready to scale.

We are excited to see what people build on Porter Cloud. If you've ever dealt with a migration from a PaaS to one of the big three cloud providers, we'd also love to hear about your experience in the comments. Looking forward to feedback and discussion!

[1] By "big three clouds" we mean the lower-level primitives of each cloud provider. We don't mean their higher-level offerings like AWS App Runner, Google Cloud Run, or Azure App Service, since those run into the same PaaS problems described above.


Show HN: HackerNews but for research papers

Hey guys, I love HN! I wanted to extend the same aesthetic and community towards things beyond tech-related news.<p>I thought it would be cool to get the same quality of community gathered around the latest and greatest research coming out.<p>Let me know what you guys think of what I have so far. It's still early, so there are probably bugs and other quality issues.<p>If there are any features missing that you'd want, let me know.<p>ALSO, if any of you are familiar with the map of the territory of any particular field, please let me know! Would love to pick your brain and come up with a 'most important papers' section for each field.<p>Thank you!!<p>-stefan


Show HN: Adblock for Podcasts

This is a small app that achieves surprisingly good podcast adblocking. It transcribes the podcast, identifies ad segments in the transcript, then creates a new version of the podcast without the ads.
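The transcribe-then-cut pipeline described above can be sketched in a few lines. This is a simplified illustration, not the app's actual code: the real app presumably uses a speech-to-text model for the transcript and a smarter classifier for ad detection, whereas the keyword check and the hard-coded segments here are stand-ins.

```python
# Sketch of the pipeline: transcript segments come in as
# (start_sec, end_sec, text); flag likely ad segments, then compute
# the time ranges of the episode to keep. An audio library (e.g. one
# that supports slicing by time) would then splice those ranges into
# a new file.

AD_MARKERS = ("promo code", "sponsored by", "use code",
              "this episode is brought to you")

def is_ad(segment_text: str) -> bool:
    """Crude keyword-based ad detector (stand-in for a real classifier)."""
    text = segment_text.lower()
    return any(marker in text for marker in AD_MARKERS)

def keep_ranges(segments):
    """Return the non-ad time ranges, merging back-to-back kept segments."""
    ranges = []
    for start, end, text in segments:
        if is_ad(text):
            continue
        if ranges and abs(ranges[-1][1] - start) < 1e-6:
            ranges[-1] = (ranges[-1][0], end)  # extend previous range
        else:
            ranges.append((start, end))
    return ranges

segments = [
    (0.0, 30.0, "Welcome back to the show."),
    (30.0, 75.0, "This episode is brought to you by Acme, use code POD."),
    (75.0, 120.0, "So, back to our interview."),
]
print(keep_ranges(segments))  # [(0.0, 30.0), (75.0, 120.0)]
```

In practice the interesting part is the middle step: deciding which transcript segments are ads, which the app does far better than a keyword list can.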
