The best Show HN and Launch HN stories from Hacker News from the past week
Latest posts:
Show HN: ustaxes.org – open-source tax filing webapp
Show HN: M1 Chart – The stock market adjusted for the US-dollar money supply
Show HN: Khan-dl – Khan Academy Course Downloader
Show HN: Search inside YouTube videos using natural language queries
Launch HN: Wasp (YC W21) – DSL for building full-stack web apps
Hi HN!

We are Martin and Matija, twin brothers and creators of Wasp (https://wasp-lang.dev). Wasp is a declarative language that makes it really easy to build full-stack web apps while still using the latest technologies such as React, Node.js and Prisma.

Martin and I both studied computer science, where we mostly focused on algorithms for bioinformatics. Afterwards we led engineering teams in several SaaS companies, gaining plenty of experience in building web apps along the way.

Moving from one project to another, we used various technologies: jQuery -> Backbone -> Angular -> React; own scripts / makefile -> Grunt -> Gulp -> Webpack; PHP -> Java -> Node.js; ... We always felt that things were harder than they should be. We were spending a lot of time adopting the latest tech stack and figuring out best practices: how to make the web app performant, scalable, economical and secure, and how to connect all the pieces of the stack together.

While the tech stack kept advancing rapidly, the core requirements of the apps we were building changed very little (auth, routing, data model CRUD, ACL, ...). That is why about 1.5 years ago we started thinking about separating the web app specification (what it should do) from its implementation (how it should do it).

This led us to the idea of extracting common web app features and concepts into a special specification language from which we could generate code in currently popular technologies. We don't think it is feasible to replace everything with a single language, so we went with a DSL that integrates with the modern stack (right now React, Node.js and Prisma).

Wasp lets you define the high-level aspects of your web app (auth, routing, ACL, data models, CRUD) via a simple specification language and then write your specific logic in React and Node.js. The majority of the code is still written in React and Node.js, with Wasp serving as the backbone of your whole application. To see what the language looks like in practice, take a look here: https://github.com/wasp-lang/wasp/blob/master/examples/tutorials/TodoApp/main.wasp

The main difference between Wasp and frameworks (e.g. Meteor, Blitz, Redwood) is that Wasp is a language, not a library. One benefit of that is a simpler, cleaner, declarative syntax, focused on the requirements and detached from implementation details.

Another benefit of a DSL is that it allows Wasp to understand the web app's requirements at build time and reason about them before generating the final code. For example, when generating code to be deployed to production, it could pick the most appropriate architecture based on its understanding of the web app and deploy it to serverless or another type of architecture (or even a combination). Another example is reusing your data model logic through all parts of the stack while defining it just once in Wasp. A DSL opens up the potential for optimisations, static analysis and extensibility.

Wasp's compiler is built in Haskell and compiles the source code in Wasp + React/Node.js into target code in just React and Node.js (currently JavaScript, but we plan to move to TypeScript soon). The generated code is human-readable and can easily be inspected and even ejected if Wasp becomes too limiting.

We are currently in Alpha and many features are still rough or missing, but you can try it out and build and deploy web apps! There are things we haven't solved yet and others that will probably change as we progress.

You can check out our repo at https://github.com/wasp-lang/wasp and give it a try at https://wasp-lang.dev/docs/.

Thank you for reading! We would love to get your feedback and also hear about your experiences building web apps - what has worked for you and where do you see opportunities for improvement?
Show HN: I wrote an entire book to build a mouseless dev environment
Launch HN: SigNoz (YC W21) – Open-source alternative to DataDog
Hi HN,

Pranay and Ankit here. We're the founders of SigNoz (https://signoz.io), an open source observability platform. We are building an open-core alternative to DataDog for companies that are security and privacy conscious and are concerned about the huge bills they pay to SaaS observability vendors.

Observability means being able to monitor your application components - from mobile and web front-ends to infrastructure - and being able to ask questions about their state: things like latency, error rates, RPS, etc. Better observability helps developers find the cause of issues in their deployed software and solve them quickly.

Ankit was leading an engineering team, where we became aware of the importance of observability in a microservices system in which each service depends on the health of multiple other services. And we saw that this problem was getting more and more important, especially in today's world of distributed systems.

The journey of SigNoz started with our own pain point. I was working at a startup in India. We didn't use application performance monitoring (APM) tools like DataDog or NewRelic because they were very costly, though we badly needed them. We had many customers complaining about broken APIs or payments not processing - and we had to go into war-room mode to solve it. A good observability system would have allowed us to solve these issues much more quickly.

Not having any solution which met our needs, we set out to do something about it.

In our initial exploration, we tried setting up RED (Rate, Error and Duration) and infra metrics using Prometheus. But we soon realized that metrics can only give you an aggregate overview of systems. You need to debug why these metrics went haywire. This led us to explore Jaeger, an open source distributed tracing system.

Key issues with Jaeger were that there is no concept of metrics in Jaeger, and the datastores supported by Jaeger lack aggregation capabilities. For example, if you tagged your premium customers with "customer_type: premium", you couldn't find the p99 latency they experienced through Jaeger.

We found that though there are many backend products, an open source product with a UI custom-built for observability, integrating metrics & traces, was missing.

Also, some folks we talked to expressed concern about sending data outside their boundaries - and we felt that with increasing privacy regulations, this would become more critical. We thought there was scope for an open source solution that addresses these points.

We think there is currently a huge gap between the state of SaaS APM products and OSS products. There is scope for an open-core product which is open source but also supports enterprise scale and comes with support and advanced features.

Some of our key features: (1) a seamless UI to track metrics and traces, (2) the ability to get metrics for business-relevant queries, e.g. latency faced by premium customers, (3) aggregates on filtered traces, etc.

We plan to focus next on building a native alert manager, support for custom metrics, and then logs (waiting for OpenTelemetry logs to mature more first). More details about our roadmap here: https://signoz.io/docs/roadmap

SigNoz is built with Go & React. Its design is inspired by streaming data architecture: data is ingested into Kafka, and relevant info & metadata is extracted by stream processing. Any number of processors can be built as per business needs. Processed data is ingested into a real-time analytics datastore, Apache Druid, which powers aggregates for slicing and dicing high-dimensional data.
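The aggregation gap described above - a percentile over traces filtered by a tag - can be sketched in a few lines. The span dictionaries and tag names below are illustrative assumptions, not SigNoz's actual data model:

```python
# Sketch: p99 latency over spans matching a tag filter (e.g. premium customers).
# The span/tag shapes here are hypothetical, not SigNoz's real schema.
import math

def p99_latency(spans, tag_key, tag_value):
    """Nearest-rank p99 over the spans whose tags match the filter."""
    latencies = sorted(
        s["duration_ms"] for s in spans
        if s["tags"].get(tag_key) == tag_value
    )
    if not latencies:
        return None
    rank = math.ceil(0.99 * len(latencies))  # nearest-rank percentile
    return latencies[rank - 1]

spans = [
    {"duration_ms": d, "tags": {"customer_type": "premium"}}
    for d in range(1, 101)  # premium spans with 1..100 ms latencies
] + [
    {"duration_ms": 5000, "tags": {"customer_type": "free"}}  # excluded by filter
]

print(p99_latency(spans, "customer_type", "premium"))  # -> 99
```

A trace backend has to run exactly this kind of filter-then-aggregate query server-side over high-cardinality tags, which is what a columnar store like Druid is good at.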
In the initial benchmarks we did for self-hosting SigNoz, we found that it would be 10x more cost-effective than SaaS vendors (https://signoz.io/blog/signoz-benchmarks/).

We've launched this repo under the MIT license so any developer can use the tool. The goal is to not charge individual developers & small teams. We eventually plan on making a licensed version where we charge for features that large companies care about, like advanced security, single sign-on, advanced integrations and support.

You can check out our repo at https://github.com/SigNoz/signoz. We have a ton of features in mind and would love for you to try it and let us know your feedback!
Show HN: Clerk – all of user management as-a-service, not just authentication
Show HN: Stamp turns a folder into a plain text file and a file into a folder
Show HN: LibreTranslate – Open-source neural machine translation API
Show HN: Epub.to – ePub to pdf, mobi, Kindle, and an API
Show HN: Ht – HTTPie Clone in Rust
Show HN: Remarkbox – Hosted comments without ads or tracking
Show HN: YTT Tech – My curated database of instructional YouTube Videos
Show HN: Haven – Run a private website to share with only the people you choose
Launch HN: Albedo (YC W21) – Highest resolution satellite imagery
Hey HN! I'm Topher, here with Winston and AJ, and we're the co-founders of Albedo (https://albedo.space). We're building satellites that will capture both visible and thermal imagery - at a resolution 9x higher than what is available today (see comparison: https://photos.app.goo.gl/gwokp4WT8JPvyue98).

My technical background is primarily in optics/imaging science related to remote sensing. I previously worked for Lockheed Martin, where I met AJ, who is an expert in satellite architecture and systems engineering. We've spent most of our careers working on classified space systems, and while the missions we were involved with are super cool, that world is slower to adopt the latest new-space technologies. We started Albedo to create a new type of satellite architecture that captures high resolution imagery at a fraction of the cost historically required. Winston was previously a software engineer at Facebook, where he frequently used satellite imagery and realized the huge potential of higher resolution datasets.

While the use cases for satellite imagery are endless, adoption has been underwhelming - even for obvious and larger applications like agriculture, insurance, energy, and mapping. The main limitations that have prevented widespread use are high cost, inaccessibility, and low resolution.

Today, buying commercial satellite imagery involves a back-and-forth with a salesperson in a sometimes months-long process, with high prices that exclude all but the biggest companies. This needs to be simplified with transparent, commodity pricing and an automated process where all you need to buy imagery is a credit card. On the accessibility front, it's surprising how few providers have nailed down a streamlined, fully cloud-based delivery mechanism. While working at Facebook, Winston sometimes dealt with imagery delivered through FTP servers or physical hard drives. Another thing users are looking for is more transparency when tasking a new satellite image, such as an immediate assessment of when it will be collected. These are all problems we are working on solving at Albedo.

On the space side, we're able to achieve substantial cost savings by taking advantage of emerging space technologies, two of which are electric propulsion and on-orbit refueling. Our satellites will fly super close to the earth, essentially in the atmosphere, enabling 10cm resolution without having to build a school-bus-sized satellite.

Electric propulsion makes the fuel on our satellites way more efficient, at the expense of low thrust. Think of it like your car's gasoline going from 30 to 300 mpg, but you can only drive 5 mph. Our propulsion only needs to maintain a steady offset to atmospheric drag, so low thrust and high efficiency is perfect. By the time our first few satellites run out of fuel, on-orbit refueling will be a reality, and we can just refill our tanks. We're still in the architecture and design phase, but we expect to have our first few satellites flying in 2024 and the full constellation up in 2027.

The current climate crisis requires a diverse set of sensors in space to support emissions monitoring, ESG initiatives/investments, and infrastructure sustainability. Thermal sensors are a key component of this, and very few are currently in orbit. Since our satellites are larger than normal, they are uniquely suited to capture the long wavelengths of thermal energy at a resolution of 2 meters. We'll also be taking advantage of advances in microbolometer technology to eliminate the crazy cooling requirements that have made thermal satellites so expensive in the past. The current state of the art for thermal resolution is 70 meters, which is only marginally useful for most applications.

We're aiming to adopt the stance of being a pure data provider (i.e. not doing analytics). We think the best way to facilitate overall market growth is to do one thing incredibly well: sell imagery better, cheaper, and faster than what users have available today. While this allows us to be vertical-agnostic, some of our better-suited applications include crop health monitoring, pipeline inspection, property insurance underwriting/weather damage evaluation, and wildfire/vegetation management around power lines. By making high-res imagery a commodity, we are also betting on it unlocking new applications, in a similar fashion to GPS (e.g. Tinder, Pokemon Go, and Uber).

One last thing - new remote sensing regulations were released by NOAA last May, removing the previous limit on resolution. So between the technology side and the regulatory side, the timing is kind of perfect for us.

All thoughts and questions are appreciated - and we'd love to hear if you know of any companies that could benefit from our imagery. Thanks for reading!
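The "fly lower instead of building a bigger telescope" tradeoff in the Albedo post falls out of the standard diffraction limit (Rayleigh criterion). A rough sketch, where the altitudes and the 1 m aperture are illustrative assumptions, not Albedo's actual design:

```python
# Rough diffraction-limited ground sample distance (GSD) sketch.
# Rayleigh criterion: GSD ~= 1.22 * wavelength * altitude / aperture_diameter.
# The altitude and aperture values below are illustrative, not a real design.

def diffraction_limited_gsd(wavelength_m, altitude_m, aperture_m):
    """Smallest resolvable ground feature for a diffraction-limited telescope, in meters."""
    return 1.22 * wavelength_m * altitude_m / aperture_m

GREEN = 550e-9  # assumed visible wavelength, ~550 nm

# With the same 1.0 m aperture, dropping from a typical ~500 km orbit to a
# very low ~160 km orbit shrinks the achievable GSD proportionally.
high_orbit = diffraction_limited_gsd(GREEN, 500e3, 1.0)  # ~0.34 m
low_orbit = diffraction_limited_gsd(GREEN, 160e3, 1.0)   # ~0.11 m
print(f"{high_orbit:.2f} m vs {low_orbit:.2f} m")
```

Since GSD scales linearly with altitude, flying very low buys resolution that would otherwise require a much larger (and heavier) aperture - at the cost of atmospheric drag, which is why the post pairs low flying with efficient electric propulsion.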
Launch HN: Opstrace (YC S19) – open-source Datadog
Hi HN!

Seb here, with my co-founder Mat. We are building an open-source observability platform aimed at the end user. We assemble what we consider the best open source APIs and interfaces, such as Prometheus and Grafana, but make them as easy to use and featureful as Datadog, with, for example, TLS and authentication by default. It's scalable (horizontally and vertically) and upgradable without a team of experts. Check it out here: http://opstrace.com/ and https://github.com/opstrace/opstrace

About us: I co-founded dotCloud, which became Docker, and was also an early employee at Cloudflare, where I built their monitoring system back when there was no Prometheus (I had to use OpenTSDB :-). I have since been told it's all been replaced with modern stuff - thankfully! Mat and I met at Mesosphere where, after building DC/OS, we led the teams that would eventually transition the company to Kubernetes.

In 2019, I was at Red Hat and Mat was still at Mesosphere. A few months after IBM announced it was purchasing Red Hat, Mat and I started brainstorming problems that we could solve in the infrastructure space. We started interviewing a lot of companies, always asking them the same questions: "How do you build and test your code? How do you deploy? What technologies do you use? How do you monitor your system? Logs? Outages?" A clear set of common problems emerged.

Companies that used external vendors - such as CloudWatch, Datadog, SignalFX - grew to a certain size where cost became unpredictable and wildly excessive. As a result (one of many downsides we would come to uncover), they monitored less: just error logs, no real metrics/logs in staging/dev, and metrics turned off in prod to reduce cost.

Companies going the opposite route - choosing to build in-house with open source software - had different problems. Building their stack took time away from product development and resulted in poorly maintained, complicated messes. Those companies are usually tempted to go to SaaS, but at their scale the cost is often prohibitive.

It seemed crazy to us that we are still stuck in this world where we have to choose between these two paths. As infrastructure engineers, we take pride in building good software for other engineers. So we started Opstrace to fix it.

Opstrace started with a few core principles: (1) The customer should always own their data; Opstrace runs entirely in your cloud account and your data never leaves your network. (2) We don't want to be a storage vendor - that is, we won't bill customers by data volume, because this creates the wrong incentives for us. (AWS and GCP are already pretty good at storage.) (3) Transparency and predictability of costs - you pay your cloud provider for the storage/network/compute for running Opstrace and can take advantage of any credits/discounts you negotiate with them. We are incentivized to help you understand exactly where you are spending money, because you pay us for the value you get from our product with per-user pricing. (For more about costs, see our recent blog post: https://opstrace.com/blog/pulling-cost-curtain-back). (4) It should be REAL open source, with the Apache License, Version 2.0.

To get started, you install Opstrace into your AWS or GCP account with one command: `opstrace create`. This installs Opstrace in your account, creates a domain name and sets up authentication for you for free. Once logged in, you can create tenants that each contain APIs for Prometheus, Fluentd/Loki and more. Each tenant has a Grafana instance you can use. A tenant can be used to logically separate domains - for example, prod, test, staging, or teams. Whatever you prefer.

At the heart of Opstrace runs a Cortex (https://github.com/cortexproject/cortex) cluster to provide the above-mentioned scalable Prometheus API, and a Loki (https://github.com/grafana/loki) cluster for the logs. We front those with authenticated endpoints (all public in our repo). All the data ends up stored only in S3, thanks to the amazing work of the developers on those projects.

An "open source Datadog" requires more than just metrics and logs. We are actively working on a new UI for managing, querying and visualizing your data, and many more features, like automatic ingestion of logs/metrics from cloud services (CloudWatch/Stackdriver), Datadog-compatible API endpoints to ease migrations and side-by-side comparisons, and synthetics (e.g. Pingdom). You can follow along on our public roadmap: https://opstrace.com/docs/references/roadmap

We will always be open source, and we make money by charging a per-user subscription for our commercial version, which will contain fine-grained authz, bring-your-own OIDC and custom domains.

Check out our repo (https://github.com/opstrace/opstrace) and give it a spin (https://opstrace.com/docs/quickstart).

We'd love to hear your perspective. What are your experiences with the problems discussed here? Are you happy with the tools you're using today?
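For a concrete feel of the Loki ingestion path mentioned in the Opstrace post: Loki's v1 push endpoint accepts a JSON body of labeled streams with nanosecond-timestamped lines. A minimal sketch of building such a payload (the labels and log line are made up; consult the Loki HTTP API docs for the authoritative schema):

```python
# Sketch: building a payload for Loki's push API (POST /loki/api/v1/push).
# Labels and log lines below are illustrative; see Loki's docs for the schema.
import json
import time

def loki_push_body(labels, lines):
    """Build the JSON body for Loki's v1 push endpoint: one stream, N lines."""
    now_ns = str(time.time_ns())  # Loki expects nanosecond timestamps as strings
    return json.dumps({
        "streams": [{
            "stream": labels,  # label set identifying the stream
            "values": [[now_ns, line] for line in lines],
        }]
    })

body = loki_push_body({"app": "checkout", "env": "prod"},
                      ["request handled in 12ms"])
print(json.loads(body)["streams"][0]["stream"]["app"])  # -> checkout
```

In an Opstrace-like setup this body would be POSTed to the tenant's authenticated Loki endpoint; here we only construct it, since sending requires a running cluster.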
Show HN: Collection of deep learning implementations with side-by-side notes