The best Show HN stories from Hacker News from the past day

Latest posts:

Show HN: An open-source implementation of AlphaFold3

Hi HN - we’re the founders of Ligo Biosciences and are excited to share an open-source implementation of AlphaFold3, the frontier model for protein structure prediction.

Google DeepMind and their new startup Isomorphic Labs are expanding into drug discovery. They developed AlphaFold3 as their model to accelerate drug discovery and create demand from big pharma. They have already signed Novartis and Eli Lilly for $3 billion - Google is becoming a pharma company! (https://www.isomorphiclabs.com/articles/isomorphic-labs-kicks-off-2024-with-two-pharmaceutical-collaborations)

AlphaFold3 is a biomolecular structure prediction model that can do three main things: (1) predict the structure of proteins; (2) predict the structure of drug-protein interactions; (3) predict nucleic acid-protein complex structures.

AlphaFold3 is incredibly important for science because it vastly accelerates the mapping of protein structures. It takes one PhD student their entire PhD to solve one structure. With AlphaFold3, you get a prediction in minutes on par with experimental accuracy.

There’s just one problem: when DeepMind published AlphaFold3 in May (https://www.nature.com/articles/s41586-024-07487-w), there was no code. This raised questions about reproducibility (https://www.nature.com/articles/d41586-024-01463-0) as well as complaints from the scientific community (https://undark.org/2024/06/06/opinion-alphafold-3-open-source/).

AlphaFold3 is a fundamental advance in structure modeling technology that the entire biotech industry deserves to be able to benefit from. Its applications are vast, including:

- CRISPR gene editing technologies, where scientists can see exactly how the DNA interacts with the Cas "scissor" protein;

- Cancer research: predicting how a potential drug binds to the cancer target. One of the highlights in DeepMind’s paper is the prediction of a clinical KRAS inhibitor in complex with its target;

- Antibody/nanobody-to-target predictions. AlphaFold3 improves accuracy on this class of molecules twofold compared to the next best tool.

Unfortunately, no company can use it, since it is under a non-commercial license!

Today we are releasing the full model trained on single-chain proteins (capability 1 above), with the other two capabilities to be trained and released soon. We also include the training code. Weights will be released once training and benchmarking are complete. We wanted this to be truly open source, so we used the Apache 2.0 license.

DeepMind published the full structure of the model, along with each component’s pseudocode, in their paper. We translated this fully into PyTorch, which required more reverse engineering than we expected!

When building the initial version, we discovered multiple issues in DeepMind’s paper that would interfere with training - we think the deep learning community might find these especially interesting. (Diffusion folks, we would love feedback on this!) These include:

- MSE loss scaling differs from Karras et al. (2022). The weighting provided in the paper does not down-weight the loss at high noise levels.

- Omission of residual layers in the paper - we add these back and see benefits in gradient flow and convergence. Does anyone have any idea why DeepMind may have omitted the residual connections in the DiT blocks?

- The MSA module, in its current form, has dead layers. The last pair-weighted averaging and transition layers cannot contribute to the pair representation, hence no gradients. We swap the order to the one in the ExtraMsaStack in AlphaFold2. An alternative solution would be weight sharing, but whether this is done is ambiguous in the paper.

More about those issues here: https://github.com/Ligo-Biosciences/AlphaFold3

How this came about: we are building Ligo (YC S24), where we are using ideas from AlphaFold3 for enzyme design. We thought open-sourcing it was a nice side quest to benefit the community.

For those on Twitter, there was a good thread a few days ago with more information: https://twitter.com/ArdaGoreci/status/1830744265007480934

A few shoutouts: a huge thanks to OpenFold for pioneering the previous open-source implementation of AlphaFold. We did a lot of our early prototyping with proteinFlow, developed by Lisa at AdaptyvBio; we look forward to partnering with them to bring you the next versions! We are also partnering with Basecamp Research to supply this model with the best sequence data known to science. Thanks to Matthew Clark (https://batisio.co.uk) for his amazing animations!

We’re around to answer questions and look forward to hearing from you!
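For the diffusion-minded, the Karras et al. (2022) weighting referred to above can be sketched as follows. This is a minimal illustration of the published EDM formula, not Ligo's training code; `sigma_data` is a dataset-dependent constant (0.5 is the value used for images in the EDM paper, shown here only as a placeholder):

```python
# Sketch of the EDM per-sample MSE loss weight from Karras et al. (2022):
#   lambda(sigma) = (sigma^2 + sigma_data^2) / (sigma * sigma_data)^2
# Note lambda = 1/sigma_data^2 + 1/sigma^2, so it flattens out at high noise
# and grows at low noise, i.e. high-noise samples are relatively down-weighted.

def edm_loss_weight(sigma: float, sigma_data: float = 0.5) -> float:
    """MSE weight for a sample noised at level sigma."""
    return (sigma**2 + sigma_data**2) / (sigma * sigma_data) ** 2
```

Because λ(σ) flattens to 1/σ_data² at high noise while growing like 1/σ² at low noise, high-noise samples contribute relatively less to the loss, which is the down-weighting the post says the AF3 paper's stated weighting lacks.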

Show HN: OBS Live-streaming with 120ms latency

Show HN: Icebreaking AI. A free tool to help you find close friends

Hello, everyone!

My name is Alex, and I'm an expat in a new country. Two years ago, I moved to Germany, where I didn’t know anyone. Starting the job, I realized that becoming close friends with colleagues can be quite challenging and slow. The feeling of loneliness was gradually growing.

To change that, I started attending various meetups based on my interests, hoping to make new friends. However, I often found myself answering the same generic questions that didn’t really help me understand if someone shared my values and interests. As a result, I lost energy and made zero friends.

SO - that’s when I began using icebreaking questions in smaller groups. Surprisingly, these questions quickly took any conversation to a deeper, more engaging and meaningful level. I was able to identify people I genuinely connected with and started spending more time with them.

BUT - preparing and using these questions was a pain. I would Google the top 100, pick a random number, and choose a question. I felt a strong need for a simple "one button" that does everything for me.

AND - after searching and not finding such a solution, I decided to build it myself. I created a vast database of questions across various topics and integrated AI to generate questions on any subject.

The idea has resonated well within my communities (university and expat parties), and more people are starting to use it. I’d love to hear your stories with icebreaking questions, your experience, and your feedback so I can make the service even more useful for more people!

Check it out and share your thoughts!

Show HN: PlasCAD: Open-source plasmid editor

Hey! This is an open-source plasmid and vector (short, often circular segments of DNA) editor, with features for primer quality checks, PCR cloning, and protein analysis. I plan to add more restriction-enzyme-based features in the near future. It has some extras like a solution-mixing helper and automatic feature annotation.

From a technical standpoint, this is a standalone binary written in Rust using the egui library. A project goal is performance, with a small memory footprint and small application and file sizes.

This is a continuous work in progress, and I'm open to any and all feedback, criticism, and requested features.

Show HN: Shehzadi in Peril – My first ever game

Hello HN! This is the first game I ever built. It's very simple, but I'm still kind of proud of it because all the pixel art is original. Thanks for taking a look!

GitHub link: https://github.com/shajidhasan/shehzadi-in-peril

Show HN: Hestus – AI Copilot for CAD

Hello! We’re Kevin and Sohrab from Hestus (https://www.hestus.co). We're working on an AI copilot for CAD. Today we're releasing a simple sketch helper for Fusion 360 and would love your feedback. Here’s a quick demo: https://www.youtube.com/watch?v=L9n_eY-fM_E

Why we’re doing this: mechanical engineers excel at generating initial design concepts but get bogged down translating ideas into final designs due to tedious, repetitive tasks. Our goal is to automate these mundane processes, allowing engineers to focus on the creative aspects of design.

Having worked at multiple hardware companies, from medical devices to space launch vehicles, we know how often "trivial" components, such as manufacturing rigging, get brushed under the table in scheduling conversations. These tasks aren’t necessarily complex, but they take time and still require the rigor of production components. From finding the perfect fastener to making sure mounting holes align, we aim to simplify and accelerate the design process, from the complex to the mundane.

We're tackling this problem similarly to how coding copilots help programmers work faster. Initially, rudimentary coding assistants offered simple suggestions like auto-completing variables. Now they understand complex tasks, write entire code blocks, and help fix bugs. We're taking this step-by-step approach, starting with a beta that focuses on sketching.

Our sketch helper offers design suggestions, such as applying equality constraints to similarly sized circles or adding tangent constraints between lines and curves. While designers can do these tasks manually, they often require dozens of precise mouse clicks. Our software makes suggestions that you can preview and accept to streamline your workflow. Over time we aim to get better at anticipating your needs and to expand beyond sketching to other design aspects, like resolving interference issues, auto-generating bills of materials with purchase links, and offering manufacturability suggestions.

How this is different from other solutions: we've heard of complete generative part design solutions, but we don't believe this top-down approach is the best way to assist mechanical engineers. Engineers excel at and enjoy designing new concepts; we want to focus on streamlining the most tedious aspects. Crucially, we find that generative solutions often overlook manufacturability, a key aspect of design.

We invite you to try our sketch helper and share your thoughts! If you can think of any additional features that would make it more useful to you, we’d love to hear what they are. Any and all feedback is welcome!
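To make the circle-equality suggestion concrete, here is a hypothetical sketch (not Hestus's actual implementation) of grouping circles with near-equal radii as candidates for an equality constraint; the function name and tolerance value are illustrative assumptions:

```python
# Hypothetical sketch: find groups of sketch circles whose radii agree
# within a tolerance, as candidates for an "equal radius" constraint.
# Not Hestus's code; the tolerance is an arbitrary illustrative choice.

def suggest_equality_groups(radii, tol=1e-3):
    """Return lists of circle indices whose radii are within tol of the
    first circle placed in each group; singleton groups are dropped."""
    groups = []
    for i, r in enumerate(radii):
        for g in groups:
            if abs(radii[g[0]] - r) <= tol:
                g.append(i)
                break
        else:
            groups.append([i])
    return [g for g in groups if len(g) > 1]
```

A real CAD assistant would also weigh context (existing constraints, user intent) before surfacing a suggestion, but the core detection can be this simple.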

Show HN: Repaint – a WebGL based website builder

Hey HN! We're Ben and Izak, founders of Repaint (YC S23). Repaint is like Figma, but for creating real websites.

It has panning, zooming, and dragging elements around. The settings closely follow html/css. We think an open canvas is a big improvement over other website builders. Everything is easier: styling, consistency, responsiveness…

But making it work was hard! We thought HN would appreciate the tech behind it:

- A custom WebGL rendering engine (with text, shadows, blurs, gradients, and opacity groups)
- A partial implementation of the css spec
- A custom text editor and text shaper
- A transformer to turn designs into publishable html/css

Repaint is a design tool for html/css. But internally, it doesn’t actually use html/css itself. All your designs live in a single <canvas> element.

We wanted users to be able to see all their content on one screen. We target 60+ fps, so each frame has only 16ms to render. The browser’s layout engine couldn’t handle simple actions, like zooming, with many thousands of nodes on the screen. To fix that, we wrote a rendering engine in WebGL.

Since we use custom rendering, we had to recreate a lot of built-in browser behavior ourselves.

Users modify a large DOM-like data structure in our editor. Each node in the document has a set of css-like styles. We created a layout engine that turns this document into a list of positions and sizes, which we feed into the rendering engine. Our layout engine lets us display proper html layouts without using the browser's layout engine. We support both flex and block layouts, and already support multiple layout units and properties (px, %, mins, maxes, etc.).

We also can’t use the browser’s built-in text editor, so we made one ourselves. We implemented all the common text editor features for selection and content editing (clicking, selection, hotkeys, inline styling, etc.). The state consists of content and selection; the inputs are keystrokes, clicks, and style changes. The updated state is used to render text, selection boxes, and the cursor.

When you publish a website, we turn our internal document into an html document. We've intentionally structured our document to feel similar to a dom tree. Each node has a 1:1 mapping with an html element. Nodes have a list of breakpoints, which represent media queries; we apply the styles by turning them into selectors. All of the html pages are saved and stored on our fileserver for hosting.

We made a playground for HN, so you can try it yourself. Now that the tech works, we’d love feedback and ideas for improving the product experience.

And we’d love to meet more builders interested in the space. If that’s you, feel free to say hello in the comments! Or you can reach Ben from his website.

Playground: https://app.repaint.com/playground

Demo video: https://www.loom.com/share/03ee81317c224189bfa202d3eacfa3c3?sid=094a4888-5ca7-4c4f-ba57-bb24cffe169c

Main website: https://repaint.com/

Contact: https://benshumaker.xyz/
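The core of a block-layout pass like the one described (tree in, positions and sizes out) can be sketched in a few lines. This is a hypothetical Python illustration of the idea, not Repaint's engine, which also handles flex layouts, %/min/max units, and text shaping:

```python
# Hypothetical sketch: walk a DOM-like tree and assign each node an
# (x, y, width, height), stacking block-level children vertically.
from dataclasses import dataclass, field

@dataclass
class Node:
    height: float                  # fixed content height, for simplicity
    children: list = field(default_factory=list)
    layout: tuple = None           # filled in by layout_block

def layout_block(node, x=0.0, y=0.0, width=800.0):
    """Lay out `node` and its subtree; children fill the parent's width
    and stack top-to-bottom, like CSS block flow."""
    cursor = y
    for child in node.children:
        layout_block(child, x, cursor, width)
        cursor += child.layout[3]  # advance by the child's computed height
    total = max(node.height, cursor - y)
    node.layout = (x, y, width, total)
    return node.layout
```

The output list of rectangles is exactly what a WebGL renderer needs: no browser layout engine in the loop, so zooming just rescales the same geometry.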

Show HN: Open-Source Software for Designing 3D-Printable Luneburg Lenses for RF

Hi HN community,

I’m excited to share my project, LuneForge, an open-source tool currently in development that aims to simplify the design of Luneburg lenses specifically for radio frequency (RF) applications. Luneburg lenses are unique gradient-index lenses that focus RF signals effectively, making them valuable in various RF and antenna systems aimed at the military and automotive industries.

Key features:

- Customizable designs: easily adjust parameters to tailor lens designs to specific RF needs.
- User-friendly interface: designed to be accessible for both RF professionals and hobbyists.
- 3D-printing optimization: models are optimized for SLA 3D printing, ensuring precise and high-quality lenses.
- Community-driven: we’re building a community of RF enthusiasts and professionals to contribute, share knowledge, and push the boundaries of RF lens design.

I’d love to hear your feedback, suggestions, or ideas for new features. Feel free to check out the repository: https://github.com/jboirazian/LuneForge
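For readers unfamiliar with these lenses: the classic Luneburg profile is n(r) = sqrt(2 - (r/R)²), falling from n = √2 at the center to n = 1 at the surface, and a printable lens approximates the gradient with discrete shells of varying effective permittivity (ε_r = n²). A small sketch of that math; the shell discretization shown is illustrative, not LuneForge's actual algorithm:

```python
# The classic Luneburg gradient-index profile, discretized into concentric
# shells as a 3D-printable approximation. Shell count is illustrative.
import math

def luneburg_index(r: float, R: float) -> float:
    """Refractive index at radius r (0 <= r <= R) for a lens of radius R."""
    return math.sqrt(2.0 - (r / R) ** 2)

def shell_permittivities(R: float, n_shells: int):
    """Relative permittivity (eps_r = n^2) sampled at each shell's mid-radius,
    from the center outward; in practice this maps to print infill density."""
    return [luneburg_index((i + 0.5) * R / n_shells, R) ** 2
            for i in range(n_shells)]
```

The permittivities decrease monotonically from about 2 at the core to about 1 at the rim, which is what lets an incoming plane wave from any direction focus to a point on the opposite surface.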

Show HN: Full Text, Full Archive RSS Feeds for Any Blog

Show HN: A modern way to type in African languages

Hello HN, I'm pythonbrad, a core maintainer of Afrim - an input method engine for African languages.<p>Afrim aims to simplify typing in African languages and to digitize African typing systems. Basically, it wants to solve the problems found in current solutions: - slow typing - not easily configurable - keyboard-layout dependent - constant bugs<p>Additionally, Afrim offers the following features [1]: - Easily customizable datasets - Keyboard-layout independence - Auto-completion, autocorrection, and autosuggestion - Support for all sequential codes<p>Technical details [2]: Afrim is written in Rust, and its architecture is inspired by RIME.<p>What's next? - Offer an Android frontend for Afrim (in development) [3] - Support as many African input methods as possible<p>I would like to hear your opinions about this project. I have been working on it for a while, and I would like to know how I can improve it.<p>-------------- [1] <a href="https://github.com/pythonbrad/afrim?tab=readme-ov-file#features">https://github.com/pythonbrad/afrim?tab=readme-ov-file#featu...</a> [2] <a href="https://pythonbrad.github.io/afrim-man/for_developers" rel="nofollow">https://pythonbrad.github.io/afrim-man/for_developers</a> [3] <a href="https://github.com/pythonbrad/afrim-keyboard/">https://github.com/pythonbrad/afrim-keyboard/</a>
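To make "sequential codes" concrete: the idea is that a short sequence of plain keystrokes is rewritten into a character the physical keyboard lacks. A toy Python sketch of the mechanism (the code table here is made up for illustration; Afrim itself is written in Rust and does much more, including layout independence and autocorrection):

```python
# Toy sequential-code input: each keystroke is appended to a buffer, and
# when the buffer's tail matches a known code, it is replaced by the
# target character. Codes below are hypothetical examples.
CODES = {
    "a^": "â",
    "e^": "ê",
    "c,": "ç",
}

def type_keys(keys):
    out = ""
    for key in keys:
        out += key
        for code, char in CODES.items():
            if out.endswith(code):
                out = out[: -len(code)] + char
                break
    return out

print(type_keys("pa^te"))  # the "a^" sequence becomes "â"
```

A real engine works on keystroke events rather than strings, but the rewrite-on-match loop is the core of the idea.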
