The best Show HN stories from Hacker News from the past day
Latest posts:
Show HN: Plato – Airtable for your SQL database
Hi! I've been a member of HN for fifteen years, so today I'm very excited to share Plato.

Plato is an Airtable-like interface for your Postgres or MySQL database. It's an admin panel for devs and non-devs alike to manage your DB. We see teams use Plato for customer support, customer success, ops, etc.

We built Plato because we think more people should be able to build and extend internal tools. We thought it was strange that even though low-code is supposed to democratize development, all of the low-code internal tool builders are marketed to engineers! Airtable is a familiar UI that fits the relational model well, so we've been inspired by their work. Even the engineers on our team use Plato quite a bit, since it's often easier than spinning up a SQL prompt.

Some features:

- Postgres and MySQL support
- Visual query controls (sorts, filters, hiding columns). No SQL.
- Joins by "expanding" foreign keys
- Virtual columns for tracking new data
- Auto-generated backlinks for one-to-many relationships
- Read-only locking for individual tables
- Virtual tables for sharing new views with your team

Plato today works on databases with a public IP (just whitelist our IP to connect), but we're soon rolling out an on-prem version. We can also set up an SSH tunnel for you if you contact us at team@plato.io.

We'd love to hear your feedback! Thanks.

- Michael
Show HN: Construct Animate – our new browser-based animation tool
Show HN: CodeGPT.nvim – ChatGPT plugin for Neovim
Show HN: BBC “In Our Time”, categorised by Dewey Decimal, heavy lifting by GPT
I'm a big fan of the BBC podcast In Our Time -- and (like most people) I've been playing with the OpenAI APIs.

In Our Time has almost 1,000 episodes on everything from Cleopatra to the evolution of teeth to plasma physics, all still available, so it's my starting point to learn about most topics. But it's not well organised.

So here are the episodes sorted by library code. It's fun to explore.

Web scraping is usually pretty tedious, but I found that I could send the minimised HTML to GPT-3 and get (almost) perfect JSON back: the prompt includes the TypeScript definition.

At the same time I asked for a Dewey classification... and it worked. So I replaced a few days of fiddly work with 3 cents per inference and an overnight data run.

My takeaway is that I'll be using LLMs as function calls way more in the future. This isn't "generative" AI; more "programmatic" AI, perhaps?

So I'm interested in what temperature=0 LLM usage looks like (you want it to be pretty deterministic) at scale, and what a language that treats that as a first-class concept might look like.
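The extraction loop described above can be sketched in a few lines of Python. This is a hedged illustration, not the author's code: the TypeScript interface, field names, and prompt wording are invented for the example, and the actual API call (which would use temperature=0) is left as a comment so the sketch runs offline.

```python
import json

# Hypothetical TypeScript definition embedded in the prompt; the real
# field names used for In Our Time episodes are not shown in the post.
EPISODE_TS = """interface Episode {
  title: string;
  description: string;
  dewey_code: string;  // three-digit Dewey Decimal class, e.g. "530"
}"""

def build_prompt(minimised_html: str) -> str:
    """Ask the model to reply with JSON matching the TypeScript definition."""
    return (
        "Extract the episode data from the HTML below and reply with "
        "ONLY a JSON object matching this TypeScript definition:\n\n"
        f"{EPISODE_TS}\n\nHTML:\n{minimised_html}\n\nJSON:"
    )

def parse_episode(raw_completion: str) -> dict:
    """Validate the model's reply against the expected fields."""
    data = json.loads(raw_completion)
    missing = {"title", "description", "dewey_code"} - data.keys()
    if missing:
        raise ValueError(f"model omitted fields: {missing}")
    return data

# The real run would call the model with temperature=0 for
# near-deterministic output, roughly:
#   completion = openai.Completion.create(
#       model=..., prompt=build_prompt(html), temperature=0)
#   episode = parse_episode(completion.choices[0].text)
```

Validating the reply before storing it is what makes the "programmatic AI" pattern workable at scale: malformed responses fail loudly and can be retried.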
Show HN: ChatGPT Inline Bot on Telegram
Show HN: Regex Derivatives (Brzozowski Derivatives)
A Python sketch of a regex engine in less than 150 lines of code
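The core idea behind a derivative-based engine fits in a few dozen lines. The following is a generic textbook formulation of Brzozowski derivatives (regexes as tagged tuples, matching by differentiating with respect to each input character), not the linked author's code:

```python
# Regexes as tagged tuples; matching works by repeatedly taking the
# Brzozowski derivative with respect to each character of the input.
EMPTY = ("empty",)  # matches nothing
EPS = ("eps",)      # matches only the empty string

def char(c): return ("char", c)
def cat(r, s): return ("cat", r, s)   # concatenation rs
def alt(r, s): return ("alt", r, s)   # alternation r|s
def star(r): return ("star", r)       # Kleene star r*

def nullable(r):
    """Does r match the empty string?"""
    tag = r[0]
    if tag in ("eps", "star"):
        return True
    if tag == "alt":
        return nullable(r[1]) or nullable(r[2])
    if tag == "cat":
        return nullable(r[1]) and nullable(r[2])
    return False  # empty, char

def deriv(r, c):
    """Derivative of r w.r.t. c: the language of suffixes after consuming c."""
    tag = r[0]
    if tag in ("empty", "eps"):
        return EMPTY
    if tag == "char":
        return EPS if r[1] == c else EMPTY
    if tag == "alt":
        return alt(deriv(r[1], c), deriv(r[2], c))
    if tag == "cat":
        d = cat(deriv(r[1], c), r[2])
        # If r1 can match "", c may also start a match of r2.
        return alt(d, deriv(r[2], c)) if nullable(r[1]) else d
    if tag == "star":
        return cat(deriv(r[1], c), r)

def matches(r, s):
    for c in s:
        r = deriv(r, c)
    return nullable(r)
```

A production engine would also simplify the derivatives as it goes (e.g. `alt(EMPTY, r)` collapses to `r`) to keep the terms from growing, but the matcher is already correct without that.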
Show HN: Bearer – Open-source code security scanning solution (SAST)
Hi HN,

We're the co-founders of Bearer, and today we launch an open-source alternative to code security solutions such as Snyk Code, SonarQube, or Checkmarx. Essentially, we help security and engineering teams discover, filter, and prioritize security risks and vulnerabilities in their codebase, with a unique approach through sensitive data (PII, PD, PHI).

Our website is at https://www.bearer.com and our GitHub is here: https://github.com/bearer/bearer

We are not originally security experts, but we have been software developers and engineering leaders for over 15 years, and we thought we could bring a new perspective to security products, with a strong emphasis on developer experience, something we often found lacking in security tools.

In addition to building a truly developer-friendly security solution, we've also heard a lot of teams complain about how noisy their static code security solutions are. As a result, they often have difficulty triaging the most important issues, and ultimately it's difficult to remediate them. We believe an important part of the problem is that we lack a clear understanding of the real impact of any given security issue. Without that understanding, it's very difficult to ask developers to remediate critical security flaws.

We've built a unique approach to this problem by looking at the impact of security issues through the lens of sensitive data. Interestingly, most security teams' ultimate responsibility today is to secure that sensitive data and protect their organization from costly data loss and leakage, but until today, that connection has never been made.

In practical terms, we provide a set of rules that assess the variety of ways known code vulnerabilities (CWEs) ultimately impact your application security, and we reconcile that with your sensitive data flows.
At the time of this writing, Bearer provides over 100 rules.

Here are some examples of what those rules can detect:
- Leakage of sensitive data through cookies, internal loggers, third-party logging services, and into analytics environments.
- Non-filtered user input that can lead to breaches of sensitive information.
- Use of weak encryption libraries or misuse of encryption algorithms.
- Unencrypted incoming and outgoing communication (HTTP, FTP, SMTP) of sensitive information.
- Hard-coded secrets and tokens.
- And many more, which you can see here: https://docs.bearer.com/reference/rules/

Rules are easily extendable so you can create your own; everything is YAML based. For example, some of our early users used this system to detect the leakage of sensitive data in their backup environments, or missing application-level encryption of their health data.

I'm sure you are wondering how we can detect sensitive data flows just by looking at the code. Essentially, we perform static code analysis to detect those as well. In a nutshell, we look for sensitive data flows at two levels:
- Analyzing class names, methods, functions, variables, properties, and attributes, then tying those together into detected data structures (with variable reconciliation, etc.).
- Analyzing data structure definition files such as OpenAPI, SQL, GraphQL, and Protobuf.

Then we pass this to a classification engine that assesses 120+ data types from sensitive data categories such as Personal Data (PD), Sensitive PD, Personally Identifiable Information (PII), and Personal Health Information (PHI). All of that is documented here: https://docs.bearer.com/explanations/discovery-and-classification/

As we said before, developer experience is key; that's why you can install Bearer in 15 seconds, via cURL, Homebrew, apt-get, yum, or as a Docker image. Then you run it as a CLI locally, or as part of your CI/CD.

We currently support JavaScript and Ruby stacks, but more will follow shortly!

Please let us know what you think, and check out the repo here: https://github.com/Bearer/bearer
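As a toy illustration of the identifier-level classification described above, the sketch below maps field names to sensitive-data categories by pattern matching. The patterns, category names, and function are invented for illustration; they are not Bearer's actual classifier, which covers 120+ data types and does far more than regex matching.

```python
import re

# Illustrative patterns only; a real classifier uses a much richer taxonomy.
SENSITIVE_PATTERNS = {
    "PII": re.compile(r"(email|ssn|passport|phone)", re.I),
    "PHI": re.compile(r"(diagnosis|blood_type|prescription)", re.I),
}

def classify_fields(field_names):
    """Map each field name to the sensitive categories it appears to hold."""
    findings = {}
    for name in field_names:
        cats = [cat for cat, pat in SENSITIVE_PATTERNS.items()
                if pat.search(name)]
        if cats:
            findings[name] = cats
    return findings
```

Connecting findings like these to the code paths a vulnerability touches is what lets a rule say "this weak-encryption call handles PHI" rather than just "this call uses weak encryption."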
Show HN: Historical HN Hiring Data
Hi HN!
A few days ago I saw a graph[0] showing that the number of job postings on HN was declining. I started wondering what other trends I could glean from the data, so I created this!

You can filter the top-level comments by keyword; for example, filter by "remote" to see the massive spike around March 2020. Another interesting thing I found is that I can compare hiring across cities.

I hope you enjoy! The links to your searches are shareable, so if you find some interesting data you should be able to just link the page you're on.

[0] https://rinzewind.org/blog-en/2023/the-tech-downturn-seen-through-hacker-news-comments.html
Show HN: I built a better UI for ChatGPT