The best Show HN stories from Hacker News from the past week

Latest posts:

Show HN: Infinity – Realistic AI characters that can speak

Hey HN, this is Lina, Andrew, and Sidney from Infinity AI (https://infinity.ai/). We've trained our own foundation video model focused on people. As far as we know, this is the first time someone has trained a video diffusion transformer that's driven by audio input. This is cool because it allows for expressive, realistic-looking characters that actually speak. Here's a blog with a bunch of examples: https://toinfinityai.github.io/v2-launch-page/

If you want to try it out, you can either (1) go to https://studio.infinity.ai/try-inf2, or (2) post a comment in this thread describing a character and we'll generate a video for you and reply with a link. For example:

- "Mona Lisa saying 'what the heck are you smiling at?'": https://bit.ly/3z8l1TM
- "A 3D Pixar-style gnome with a pointy red hat reciting the Declaration of Independence": https://bit.ly/3XzpTdS
- "Elon Musk singing Fly Me To The Moon by Sinatra": https://bit.ly/47jyC7C

Our tool at Infinity allows creators to type out a script with what they want their characters to say (and eventually, what they want their characters to do) and get a video out. We've trained for about 11 GPU years (~$500k) so far and our model recently started getting good results, so we wanted to share it here. We are still actively training.

We had trouble creating videos of good characters with existing AI tools. Generative AI video models (like Runway and Luma) don't allow characters to speak. And talking avatar companies (like HeyGen and Synthesia) just do lip syncing on top of previously recorded videos. This means you often get facial expressions and gestures that don't make sense with the audio, resulting in the "uncanny" look you can't quite put your finger on. See blog.

When we started Infinity, our V1 model took the lip syncing approach. In addition to mismatched gestures, this method had many limitations, including a finite library of actors (we had to fine-tune a model for each one with existing video footage) and an inability to animate imaginary characters.

To address these limitations in V2, we decided to train an end-to-end video diffusion transformer model that takes in a single image, audio, and other conditioning signals and outputs video. We believe this end-to-end approach is the best way to capture the full complexity and nuances of human motion and emotion. One drawback of our approach is that the model is slow despite using rectified flow (2-4x speedup) and a 3D VAE embedding layer (2-5x speedup).

Here are a few things the model does surprisingly well on: (1) it can handle multiple languages, (2) it has learned some physics (e.g. it generates earrings that dangle properly and infers a matching pair on the other ear), (3) it can animate diverse types of images (paintings, sculptures, etc.) despite not being trained on those, and (4) it can handle singing. See blog.

Here are some failure modes of the model: (1) it cannot handle animals (only humanoid images), (2) it often inserts hands into the frame (very annoying and distracting), (3) it's not robust on cartoons, and (4) it can distort people's identities (noticeable on well-known figures). See blog.

Try the model here: https://studio.infinity.ai/try-inf2

We'd love to hear what you think!
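
For readers curious what an audio-driven video diffusion transformer looks like at the interface level, here is a minimal, hypothetical PyTorch sketch of the conditioning setup described above (a single reference image, per-frame audio features, and a noise level in; denoised video latents out). This is not Infinity's code; every class name, dimension, and default below is invented purely for illustration.

    # Hypothetical sketch only -- not Infinity's model. It illustrates the
    # conditioning interface described in the post: a diffusion transformer
    # that denoises video latents given a reference image and audio.
    import torch
    import torch.nn as nn

    class AudioDrivenVideoDiT(nn.Module):
        def __init__(self, latent_dim=16, audio_dim=128, hidden=512, layers=8):
            super().__init__()
            self.video_proj = nn.Linear(latent_dim, hidden)
            self.image_proj = nn.Linear(latent_dim, hidden)   # single-image condition
            self.audio_proj = nn.Linear(audio_dim, hidden)    # per-frame audio features
            self.time_embed = nn.Sequential(
                nn.Linear(1, hidden), nn.SiLU(), nn.Linear(hidden, hidden))
            block = nn.TransformerEncoderLayer(hidden, nhead=8, batch_first=True)
            self.backbone = nn.TransformerEncoder(block, num_layers=layers)
            self.out = nn.Linear(hidden, latent_dim)

        def forward(self, noisy_video, ref_image, audio, t):
            # noisy_video: (B, T, latent_dim) video latents from a (3D) VAE
            # ref_image:   (B, 1, latent_dim) latent of the single input image
            # audio:       (B, T, audio_dim)  audio features aligned to frames
            # t:           (B, 1)             diffusion time / noise level
            tokens = torch.cat([
                self.image_proj(ref_image),
                self.video_proj(noisy_video) + self.audio_proj(audio),
            ], dim=1)
            tokens = tokens + self.time_embed(t).unsqueeze(1)
            return self.out(self.backbone(tokens))[:, 1:]   # denoised video latents

    # One denoising step on random inputs, just to show the shapes.
    model = AudioDrivenVideoDiT()
    x = model(torch.randn(2, 24, 16), torch.randn(2, 1, 16),
              torch.randn(2, 24, 128), torch.rand(2, 1))
    print(x.shape)   # torch.Size([2, 24, 16])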

Show HN: Wealthfolio: Private, open-source investment tracker

Thank you for your comments, just some context:

- The app is a simple desktop application that works on macOS, Windows, and Ubuntu.
- I developed this app for my own needs, having gotten tired of SaaS app subscriptions and privacy concerns.
- For now, the activities are logged manually or imported from a CSV file. No integration with Plaid or other platforms.
- No monetization is planned for now (only a "buy me a coffee" if you use and appreciate the app).

Show HN: AnythingLLM – Open-Source, All-in-One Desktop AI Assistant

Hey HN!

This is Tim from AnythingLLM (https://github.com/Mintplex-Labs/anything-llm). AnythingLLM is an open-source desktop assistant that brings together RAG (Retrieval-Augmented Generation), agents, embeddings, vector databases, and more, all in one seamless package.

We built AnythingLLM over the last year, iterating and iterating from user feedback. Our primary mission is to enable people with a layperson understanding of AI to use AI with little to no setup, whether for themselves, their jobs, or just to try out an AI assistant, but with *privacy by default*.

From these iterations and feedback, we have a couple of key learnings I wanted to share:

- "Chat with your docs" solutions are a dime a dozen.
- Agent frameworks require knowing how to code or are too isolated from other tools.
- Users do not care about benchmarks, only outputs. The magic box needs to be magic to them.
- Asking consumers to start a Docker container or open a terminal is a non-starter for most.
- Privacy by default is non-negotiable, whether by personal preference or legal constraints.
- Everything needs to be in one place.

From these ideas, we landed on the current state of AnythingLLM:

- Everything in AnythingLLM is private by default, but fully customizable for advanced users.
- Built-in LLM provider, but you can swap at any time to hundreds of other local or cloud LLM providers and models.
- Built-in vector database; most users don't even know it is there.
- Built-in embedding model, which can of course be changed if the user wants.
- Scrape websites, import GitHub/GitLab repos, YouTube transcripts, Confluence spaces: all of this is already built in for the user.
- An entire baked-in agent framework that works seamlessly within the app. We even pre-built a handful of agent skills for customers. Custom plugins are in the next update and will be buildable with code or a no-code builder.
- All of this just works out of the box in a single installable app that can run on any consumer-grade laptop. Everything a user does, chats, or configures is stored on the user's device. Available for Mac, Windows, and Linux.

We have been actively maintaining and working on AnythingLLM via our open-source repo for a while now and welcome contributors, as we hope to launch a Community Hub soon to really expand users' ability to add more niche agent skills, data connectors, and more.

*But there is even more*

We view the desktop app as a hyper-accessible single-player version of AnythingLLM. We publish a Docker image too (https://hub.docker.com/r/mintplexlabs/anythingllm) that supports multi-user management with permissioning, so you can easily bring AnythingLLM into an organization with all of the same features with minimal headache or lift.

The Docker image is for those more adept with a CLI, but being able to comfortably go from a single-user to a multi-user version of the same familiar app was very important for us.

AnythingLLM aims to be more than a UI for LLMs; we are building a comprehensive tool to leverage LLMs and all that they can do while maintaining user privacy and not needing to be an expert on AI to do it.

https://anythingllm.com/
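
To make the "RAG + embeddings + vector database" combination concrete for readers who have not wired one up before, here is a deliberately tiny, self-contained sketch of the retrieval loop that a tool like AnythingLLM packages behind its UI. It is not AnythingLLM code: embed() is a toy placeholder standing in for whatever embedding model the user configures, and the final LLM call is left as a comment.

    # Conceptual sketch of a RAG loop -- not AnythingLLM's actual code.
    import math

    def embed(text: str) -> list[float]:
        # Placeholder embedding: a normalized character histogram. A real
        # setup would call the configured embedding model instead.
        vec = [0.0] * 256
        for ch in text.lower():
            vec[ord(ch) % 256] += 1.0
        norm = math.sqrt(sum(v * v for v in vec)) or 1.0
        return [v / norm for v in vec]

    def cosine(a, b):
        return sum(x * y for x, y in zip(a, b))

    class TinyVectorStore:
        def __init__(self):
            self.items = []  # list of (embedding, chunk) pairs

        def add(self, chunk: str):
            self.items.append((embed(chunk), chunk))

        def top_k(self, query: str, k: int = 3):
            q = embed(query)
            ranked = sorted(self.items, key=lambda it: -cosine(q, it[0]))
            return [chunk for _, chunk in ranked[:k]]

    # "Chat with your docs": retrieve relevant chunks, then hand them to the LLM.
    store = TinyVectorStore()
    for chunk in ["AnythingLLM runs locally by default.",
                  "The Docker image supports multi-user mode."]:
        store.add(chunk)

    question = "Does it run locally?"
    context = "\n".join(store.top_k(question))
    prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
    # generate(prompt) would be a call to whichever LLM provider is configured.
    print(prompt)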

Show HN: An open-source implementation of AlphaFold3

Hi HN - we're the founders of Ligo Biosciences and are excited to share an open-source implementation of AlphaFold3, the frontier model for protein structure prediction.

Google DeepMind and their new startup Isomorphic Labs are expanding into drug discovery. They developed AlphaFold3 as their model to accelerate drug discovery and create demand from big pharma. They already signed Novartis and Eli Lilly for $3 billion; Google's becoming a pharma company! (https://www.isomorphiclabs.com/articles/isomorphic-labs-kicks-off-2024-with-two-pharmaceutical-collaborations)

AlphaFold3 is a biomolecular structure prediction model that can do three main things: (1) predict the structure of proteins; (2) predict the structure of drug-protein interactions; (3) predict nucleic acid - protein complex structures.

AlphaFold3 is incredibly important for science because it vastly accelerates the mapping of protein structures. It takes one PhD student their entire PhD to do one structure. With AlphaFold3, you get a prediction in minutes on par with experimental accuracy.

There's just one problem: when DeepMind published AlphaFold3 in May (https://www.nature.com/articles/s41586-024-07487-w), there was no code. This brought up questions about reproducibility (https://www.nature.com/articles/d41586-024-01463-0) as well as complaints from the scientific community (https://undark.org/2024/06/06/opinion-alphafold-3-open-source/).

AlphaFold3 is a fundamental advance in structure modeling technology that the entire biotech industry deserves to be able to reap the benefits from. Its applications are vast, including:

- CRISPR gene editing technologies, where scientists can see exactly how the DNA interacts with the scissor Cas protein;
- Cancer research: predicting how a potential drug binds to the cancer target. One of the highlights in DeepMind's paper is the prediction of a clinical KRAS inhibitor in complex with its target;
- Antibody / nanobody to target predictions. AlphaFold3 improves accuracy on this class of molecules twofold compared to the next best tool.

Unfortunately, no companies can use it since it is under a non-commercial license!

Today we are releasing the full model trained on single-chain proteins (capability 1 above), with the other two capabilities to be trained and released soon. We also include the training code. Weights will be released once training and benchmarking are complete. We wanted this to be truly open source, so we used the Apache 2.0 license.

DeepMind published the full structure of the model, along with each component's pseudocode, in their paper. We translated this fully into PyTorch, which required more reverse engineering than we thought!

When building the initial version, we discovered multiple issues in DeepMind's paper that would interfere with training; we think the deep learning community might find these especially interesting. (Diffusion folks, we would love feedback on this!) These include:

- MSE loss scaling differs from Karras et al. (2022). The weighting provided in the paper does not downweight the loss at high noise levels.
- Omission of residual layers in the paper. We add these back and see benefits in gradient flow and convergence. Anyone have any idea why DeepMind may have omitted the residual connections in the DiT blocks?
- The MSA module, in its current form, has dead layers. The last pair weighted averaging and transition layers cannot contribute to the pair representation, hence no grads. We swap the order to the one in the ExtraMsaStack in AlphaFold2. An alternative solution would be to use weight sharing, but whether this is done is ambiguous in the paper.

More about those issues here: https://github.com/Ligo-Biosciences/AlphaFold3

How this came about: we are building Ligo (YC S24), where we are using ideas from AlphaFold3 for enzyme design. We thought open-sourcing it was a nice side quest to benefit the community.

For those on Twitter, there was a good thread a few days ago that has more information: https://twitter.com/ArdaGoreci/status/1830744265007480934

A few shoutouts: A huge thanks to OpenFold for pioneering the previous open-source implementation of AlphaFold. We did a lot of our early prototyping with proteinFlow, developed by Lisa at AdaptyvBio; we also look forward to partnering with them to bring you the next versions! We are also partnering with Basecamp Research to supply this model with the best sequence data known to science. Thanks to Matthew Clark (https://batisio.co.uk) for his amazing animations!

We're around to answer questions and look forward to hearing from you!
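
For context on the first point, the reference weighting from Karras et al. (2022), the "EDM" paper, is lambda(sigma) = (sigma^2 + sigma_data^2) / (sigma * sigma_data)^2, which equals 1 / c_out(sigma)^2. The snippet below just computes that published formula as a point of comparison; it makes no claim about what the AlphaFold3 paper or the Ligo implementation actually uses, and sigma_data = 0.5 is EDM's image default rather than an AlphaFold3 value.

    # Reference point for the loss-scaling issue above: the per-sample MSE
    # weighting from Karras et al. (2022), "Elucidating the Design Space of
    # Diffusion-Based Generative Models" (EDM):
    #   lambda(sigma) = (sigma^2 + sigma_data^2) / (sigma * sigma_data)^2
    # This is the published EDM formula for comparison only, not a claim
    # about the AlphaFold3 paper's weighting.
    def edm_loss_weight(sigma: float, sigma_data: float = 0.5) -> float:
        """Weight applied to the MSE on the denoiser output in EDM."""
        return (sigma ** 2 + sigma_data ** 2) / (sigma * sigma_data) ** 2

    # Weight at low vs. high noise levels.
    for s in (0.1, 1.0, 10.0):
        print(s, edm_loss_weight(s))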

Show HN: OBS Live-streaming with 120ms latency

Show HN: Repaint – a WebGL based website builder

Hey HN! We're Ben and Izak, founders of Repaint (YC S23). Repaint is like Figma, but for creating real websites.

It has panning, zooming, and dragging elements around. The settings closely follow html/css. We think an open canvas is a big improvement over other website builders. Everything is easier: styling, consistency, responsiveness…

But making it work was hard! We thought HN would appreciate the tech behind it:

- A custom WebGL rendering engine (w/ text, shadows, blurs, gradients, & opacity groups)
- A partial implementation of the css spec
- A custom text editor & text shaper
- A transformer to turn designs into publishable html/css

Repaint is a design tool for html/css. But internally, it doesn't actually use html/css itself. All your designs live in a single <canvas> element.

We wanted users to be able to see all their content on one screen. We target 60+ fps, so frames only have 16ms to render. The browser's layout engine couldn't handle simple actions, like zooming, with many thousands of nodes on the screen. To fix that, we wrote a rendering engine in WebGL.

Since we use custom rendering, we had to create a lot of built-in browser behavior ourselves.

Users modify a large dom-like data structure in our editor. Each node in the document has a set of css-like styles. We created a layout engine that turns this document into a list of positions and sizes. We feed these values into the rendering engine. Our layout engine lets us display proper html layouts without using the browser's layout engine. We support both flex and block layouts. We already support multiple layout units and properties (px, %, mins, maxes, etc.).

We also can't use the browser's built-in text editor, so we made one ourselves. We implemented all the common text editor features regarding selection and content editing (clicking, selection, hotkeys, inline styling, etc.). The state consists of content and selection. The inputs are keystrokes, clicks, and style changes. The updated state is used to render text, selection boxes, and the cursor.

When you publish a website, we turn our internal document into an html document. We've intentionally structured our document to feel similar to a dom tree. Each node has a 1:1 mapping with an html element. Nodes have a list of breakpoints which represent media queries. We apply the styles by turning them into selectors. All of the html pages are saved and stored on our fileserver for hosting.

We made a playground for HN, so you can try it yourself. Now that the tech works, we'd love feedback and ideas for improving the product experience.

And we'd love to meet more builders interested in the space. If that's you, feel free to say hello in the comments! Or you can reach Ben from his website.

Playground: https://app.repaint.com/playground

Demo Vid: https://www.loom.com/share/03ee81317c224189bfa202d3eacfa3c3?sid=094a4888-5ca7-4c4f-ba57-bb24cffe169c

Main Website: https://repaint.com/

Contact: https://benshumaker.xyz/
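
To illustrate the "document with css-like styles in, list of positions and sizes out" idea described above, here is a toy block-layout pass. It is not Repaint's engine: the node shape, style keys, and block-only stacking are simplifications invented for the sketch, and a real engine would also handle flex, margins, and the other units mentioned in the post.

    # Toy sketch of a layout pass: resolve css-like styles on a node tree
    # into absolute boxes a renderer could consume. Not Repaint's code.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        style: dict                 # e.g. {"width": "50%", "height": "40px", "padding": 8}
        children: list = field(default_factory=list)

    def resolve(value: str, reference: float) -> float:
        # Percentages resolve against the parent; otherwise assume px.
        if value.endswith("%"):
            return reference * float(value[:-1]) / 100.0
        return float(value.removesuffix("px"))

    def layout(node: Node, x: float, y: float, parent_width: float, boxes: list) -> float:
        width = resolve(node.style.get("width", "100%"), parent_width)
        padding = float(node.style.get("padding", 0))
        cursor_y = y + padding
        for child in node.children:                      # block stacking: children flow downward
            cursor_y += layout(child, x + padding, cursor_y, width - 2 * padding, boxes)
        if "height" in node.style:
            height = resolve(node.style["height"], 0)
        else:
            height = (cursor_y - y) + padding            # auto height wraps the children
        boxes.append((node, x, y, width, height))        # positions/sizes fed to the renderer
        return height

    root = Node({"width": "800px"}, [
        Node({"height": "40px"}),
        Node({"width": "50%", "height": "120px"}),
    ])
    boxes = []
    layout(root, 0, 0, 800, boxes)
    for _, x, y, w, h in boxes:
        print(x, y, w, h)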

Show HN: Defrag the Game

Hi,

A while ago, I came across a video on YouTube showing 8 hours of defragmenting a hard drive: https://www.youtube.com/watch?v=KR3TbL3Tl6M. For some reason, it inspired me to create this small game.

Have fun :)

Show HN: Linkpreview, see how your site looks in social media and chat apps

Low Cost Mini PCs

While searching for mini PCs for my home server, I figured I'd use the eBay API to find the cheapest ones. Inspired by diskprices.com, I built a static site using Eleventy and a Python script that uses regex to parse the data. I tried to include as many filters as possible, like OS, Wi-Fi, HDMI, etc.

I would like to add power usage, noise levels, and PCIe slots, but that data is hard to find.

Please let me know if you have any feedback / suggestions.

Thanks!
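
For readers wondering what "a Python script that uses regex to parse the data" might look like in practice, here is a small illustrative example of pulling specs out of a free-form listing title. It is not the site's actual parser; the patterns and the sample title are made up for the sketch.

    # Illustrative only -- not the site's parser. Shows the general approach:
    # extracting specs like CPU, RAM, and storage from eBay listing titles.
    import re

    TITLE = "Dell OptiPlex 7040 Micro i5-6500T 8GB RAM 256GB SSD Win11 WiFi HDMI"

    PATTERNS = {
        "cpu":     re.compile(r"\b(i[3579]-\w+|Ryzen\s*\d\s*\w+|N\d{3,4})\b", re.I),
        "ram_gb":  re.compile(r"\b(\d+)\s*GB\s*(?:RAM|DDR\d?)\b", re.I),
        "storage": re.compile(r"\b(\d+\s*(?:GB|TB))\s*(?:SSD|HDD|NVMe|eMMC)\b", re.I),
        "wifi":    re.compile(r"\bWi-?Fi\b", re.I),
        "hdmi":    re.compile(r"\bHDMI\b", re.I),
    }

    def parse_title(title: str) -> dict:
        specs = {}
        for key, pattern in PATTERNS.items():
            m = pattern.search(title)
            if key in ("wifi", "hdmi"):
                specs[key] = bool(m)        # boolean filters
            elif m:
                specs[key] = m.group(1)     # captured spec value
        return specs

    print(parse_title(TITLE))
    # {'cpu': 'i5-6500T', 'ram_gb': '8', 'storage': '256GB', 'wifi': True, 'hdmi': True}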

Show HN: Typeform alternative, turns Markdown into forms

Show HN: An open-source, local-first Webflow for your own app

Hey HN, I'm Kiet, and I'm one of the co-founders of Onlook – an open-source desktop app that lets you visually edit your locally running React app, then write your changes back to code in real time.

I posted the repo a few months ago [1] when it was just 2 weeks old. Since then, we've made some big changes/improvements. I wanted to share some of the updates we've made and add more technical details. Here are the three big ones:

• Inserting new elements - Draw elements in the live page like a design tool and write them back to code.
• Component detection - Detect when an element is a re-used component and find its usages.
• DOM tree representation - A layers panel similar to the Chrome devtools or Figma.

Technical details [2]:

Visual editing - Onlook is technically a browser that points to your localhost running the app. It can manipulate the DOM like a Chrome devtool, and all these changes are injected into the page through a CSS stylesheet or DOM manipulation. The changes are non-persistent until written to code.

Write to code - To translate the changes to code, we inject an attribute into the DOM elements at build time that points back to the code like a source map. The attribute gives us the location of the code block and the component scope [3]. We then find the code, parse it into an AST, inject the styles, and write it back.

Framework support - This technique is framework agnostic, as we can swap in a different compiler for another framework [4]. It can work for any codebase, as we're just using open standards that don't require any custom code. The code generated is written directly into your codebase, locally, so you can always take the output without being locked in to the tool.

Actions - All the changes made are stored as actions. This allows them to be serialized, stored, and reproduced. We did it this way so that, eventually, we can introduce online collaboration or let an agent generate actions. To do this, we'd just need to serve the locally running page and resolve incoming actions.

What's next?

It's still a bit bare-bones, but the support and suggestions from the HN and open-source communities have helped us a lot with our direction. Now that we've built the core engine, we can start doing some cooler visual builder features, fulfilling the "Webflow" part of our mission, such as [5]:

• Detecting CSS variables in the page and letting you use them as "design tokens" in the UI.
• Duplicating a page and A/B testing designs before committing to code.
• Creating new components directly in the canvas.
• Creating a front-end project from scratch using Onlook.

Some things we're considering, but aren't sure about yet:

• Offering hosting directly from the app.
• Collaboration, such as real-time edits, comments, and sharing a page as a prototype.

I'd love to hear your thoughts/feedback. This project continues to be a blast to work on and the community response has been awesome. Thank you to everyone who has tried out and contributed to the repo :)

_________

[1] https://news.ycombinator.com/item?id=40904862

[2] https://github.com/onlook-dev/onlook/wiki/Architecture

[3] The attribute looks something like this:

    data-onlook-id="eJxNjUEKwzAMBP+ic6gOKT3k2i+kDzC2aEwcKVgyDQT/vU5pS067sMvMDl6WVZjYYIC7y2GMlgg6IA6je8LAJaUOVmdTO+BDKSvOkWwSfEme1+Q8oXASmVGthCgYaBFFps3wT1csEX3jX0y3hldz2T6C/VAd4SWVhWG4dpAiUyt9/R7Pc/+b+1ut9Q33rUM5"

And decodes to this:

    {"component":"Dashboard","endTag":{"end":{"column":10,"line":620},"start":{"column":5,"line":620}},"path":"/Users/kietho/workplace/onlook/studio/demos/next/components/dashboard.tsx","startTag":{"end":{"column":67,"line":69},"start":{"column":5,"line":69}}}

[4] We're only supporting a few versions of React at the moment for early focus: https://github.com/onlook-dev/onlook/tree/main/demos

[5] https://github.com/onlook-dev/onlook/wiki/Roadmap
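
To make the source-map-style attribute in note [3] concrete, here is one generic way such an attribute could be produced at build time and decoded in an editor: compress a small JSON blob with the component name and code location, then base64 it. This is a sketch under assumed conventions, not Onlook's documented scheme; the compression choice, helper names, and sample file path below are all illustrative.

    # Generic sketch of a build-time "code location" attribute -- the actual
    # Onlook encoding may differ. Encode: JSON -> zlib -> base64; decode in reverse.
    import base64, json, zlib

    def encode_location_attr(component: str, path: str, start_tag: dict, end_tag: dict) -> str:
        payload = {"component": component, "path": path,
                   "startTag": start_tag, "endTag": end_tag}
        return base64.b64encode(zlib.compress(json.dumps(payload).encode())).decode()

    def decode_location_attr(attr: str) -> dict:
        return json.loads(zlib.decompress(base64.b64decode(attr)))

    attr = encode_location_attr(
        "Dashboard",
        "components/dashboard.tsx",
        {"start": {"line": 69, "column": 5}, "end": {"line": 69, "column": 67}},
        {"start": {"line": 620, "column": 5}, "end": {"line": 620, "column": 10}},
    )
    # The editor reads the attribute off the DOM element and maps the edit
    # back to the right file and tag positions before rewriting the AST.
    print(decode_location_attr(attr)["path"])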

Show HN: Homemade automated solar concentrator

Hi HN!

I quit my job two years ago to have more time to work on my side projects.

The main one is an automated solar concentrator.

I've just open-sourced it. It's not perfect nor finished, and I still have a lot of ideas for further development, but I'm interested in knowing what you think of it.

There are many applications where concentrated solar power could be a viable environmental and economic solution; I hope this technology will one day be more widely used.

Feel free to give any feedback and ask questions.

Show HN: IPA, a GUI for exploring inner details of PDFs
