The best Show HN stories from Hacker News from the past day
Latest posts:
Show HN: Ghidra Plays Mario
I've been exploring new ways of testing Ghidra processor modules. In this repo, I emulate NES ROMs in Ghidra to test its 6502 specification, which resulted in finding and fixing some bugs.

Context: Ghidra is used for reverse engineering binary executables, complementing the usual disassembly view with function decompilation. Each supported architecture has a SLEIGH specification, which provides semantics for parsing and emulating instructions, not unlike the dispatch handlers you would find in interpreters written for console emulators.

Emulator devs have long had extensive test ROMs for popular consoles, but Ghidra only provides CPU emulation, so it can't run them without additional setup. What I did here is bridge the gap: by modifying a console emulator to delegate CPU execution to Ghidra, we can now use these same ROMs to validate Ghidra processor modules.

Previously [1], I went with a trace-log diffing approach, where any hardware-specific behaviour that affected CPU execution was also encoded in the trace logs. However, that required writing hardware-specific logic and is still not complete. With the delegation approach, most of this effort is avoided, since it's easier to hook and delegate memory accesses.

I plan on continuing research in this space and generalizing my approaches, since it shows potential for complementing the existing test coverage provided by pcodetest. If a simple architecture like the 6502 had a few bugs, who knows how many are in more complex architectures! I wasn't able to find similar attempts (outside of diffing and coverage analysis from trace logs); please let me know if I missed something, along with any suggestions for improvements.

[1]: https://github.com/nevesnunes/ghidra-tlcs900h#emulation
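To make the delegation approach concrete, here is a minimal sketch of the idea, not the repo's actual code: every class and method name below is hypothetical. The console emulator's CPU core is replaced with calls into an external emulator, while that emulator's memory accesses are routed back through the console's own bus so hardware-specific behaviour (PPU/APU registers, mappers) stays where it already works:

```python
# Conceptual sketch only; all interfaces here are made up for illustration.

class DelegatingCpu:
    """Swaps the console emulator's 6502 core for an external emulator."""

    def __init__(self, pcode_emulator, console_bus):
        self.emu = pcode_emulator   # e.g. a Ghidra-side emulator driven over some bridge
        self.bus = console_bus      # the NES emulator's memory-mapped bus

        # Route every memory access made by the external emulator back to the
        # console bus, so hardware-specific behaviour needs no reimplementation.
        self.emu.on_read(lambda addr, size: self.bus.read(addr, size))
        self.emu.on_write(lambda addr, data: self.bus.write(addr, data))

    def step(self):
        # Execute one instruction in the external emulator instead of the
        # console's own interpreter dispatch loop.
        self.emu.step_instruction()
        return self.emu.read_register("PC")

    def interrupt(self, vector_addr):
        # NMI/IRQ are still raised by the console; only the PC redirect changes.
        self.emu.write_register("PC", self.bus.read_u16(vector_addr))
```

Running an existing test ROM then exercises the SLEIGH specification instruction by instruction, and any divergence from the ROM's expected output points at a bug in the processor module.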
Show HN: Real-Time 3D Gaussian Splatting in WebGL
Show HN: Productonboarding.com – Mobbin for SaaS product onboarding
Hey Hacker News, Eric here!

Wanted to share a new website we just built called productonboarding.com (Next.js and RSC). The site has screenshots and videos of product onboarding from companies like Figma, Notion, Framer, and more. It's sort of like Mobbin for web-based product onboarding.

We build a lot of product onboarding at our startup Frigade, and over the last year we've put together an internal library of hundreds of product onboarding examples that we refer to all the time with customers. It helps them find and copy patterns that work at other companies, so they don't need to create net-new experiences or A/B test their way to the best-performing pattern from scratch.

Given how useful it's been to us, we decided to open it up to the world. We bought productonboarding.com, started adding examples from our collection, and made them browsable and sortable. We're planning to add new examples weekly.

Hope this is helpful to any of you who are currently thinking through building new onboarding experiences. Would love any ideas and feedback. Thanks!
Show HN: World Engine – Build Worlds Like Brandon Sanderson
Hey HN,

After hours of re-watching Brandon's BYU creative writing lectures [1], we converted his ideas into a framework for rapidly iterating on and building sci-fi and fantasy worlds.

The app is divided into 3 sections:
- World: To create a unique magic system intertwined with distinct physical and cultural phenomena
- Characters: To create character arcs and give them unique traits that evolve over time
- Plot: To weave a series of events in this world that follow the character arcs

We are newbie devs; any suggestions on improving the app and its usability would be great!

The idea is to capture the essence of Brandon's approach. We can't help but wonder if other 'earned insights' can be converted into similar applications for use cases outside of fiction.

[1] https://www.youtube.com/watch?v=0cf-qdZ7GbA&list=PLSH_xM-KC3Zv-79sVZTTj-YA6IAqh8qeQ
Show HN: Erlmacs – a script to update your .emacs file for Erlang development
erlmacs automatically configures and updates your .emacs file with support for the Emacs mode that is included with Erlang/OTP. It frees you from having to locate the installation directory of Erlang/OTP and its bundled Emacs mode.

It is an escript that depends only on Erlang/OTP and Emacs.

Note: there is not much in the way of error checking at the moment, but it does make a backup of your .emacs file before any destructive operations.
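For reference, here is a rough sketch of what a tool like this has to do, written in Python purely for illustration (the real erlmacs is an escript). The paths and the Emacs Lisp snippet follow the standard Erlang/OTP emacs-mode setup; treat the exact details as assumptions rather than erlmacs's actual output:

```python
import glob
import os
import shutil
import subprocess

# Ask the installed Erlang/OTP where its root directory is.
root = subprocess.check_output(
    ["erl", "-noshell", "-eval",
     'io:format("~s", [code:root_dir()]), halt().'],
    text=True).strip()

# The Emacs mode ships with the tools application, e.g. lib/tools-3.6/emacs.
emacs_dir = sorted(glob.glob(os.path.join(root, "lib", "tools-*", "emacs")))[-1]
bin_dir = os.path.join(root, "bin")

dot_emacs = os.path.expanduser("~/.emacs")
if os.path.exists(dot_emacs):
    shutil.copy(dot_emacs, dot_emacs + ".bak")  # back up before touching it

snippet = f"""
;; Erlang/OTP's bundled emacs mode
(setq load-path (cons "{emacs_dir}" load-path))
(setq erlang-root-dir "{root}")
(setq exec-path (cons "{bin_dir}" exec-path))
(require 'erlang-start)
"""

with open(dot_emacs, "a") as f:
    f.write(snippet)
```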
New Bézier curves for vector graphics
Show HN: TaleBot – AI-Generated Personalized Bedtime Stories for Kids
Hello HN, I'm excited to share a project my co-founder and I have been working on: TaleBot.

What It Does:
TaleBot creates a personalized bedtime story based on a parent's or caregiver's inputs. You can customize the theme, the challenges (kid-friendly obstacles) the characters face, and even the moral lesson it teaches. Once the story is ready, you'll receive a PDF file and an AI-narrated voiceover of it.

Try It Out:
We've made it as barrier-free as possible for you to test. Use the promo code HNFREETALE on our story creation form and you can get a story written for free, no sign-up required.

Looking forward to your feedback. We value your thoughts and input on our product and idea.
Show HN: Synthical – Like HN, but for Science
Show HN: WhatsApp-Llama: A clone of yourself from your WhatsApp conversations
Hello HN!

I've been thinking about the idea of an LLM that's a clone of me: instead of generating replies as a helpful assistant, it generates replies that are exactly like mine. The concept has appeared in fiction numerous times (the talking paintings in Harry Potter that mimic the person painted, the clones in The Prestige), and I think with LLMs there might actually be a possibility of doing something like this!

I've just released a fork of facebookresearch/llama-recipes which lets you fine-tune a Llama model on your personal WhatsApp conversations. This adaptation can train the model (using QLoRA) to respond in a way that's eerily similar to your own texting style.

What I've figured out so far:

Quick Learning: The model quickly adapts to personal nuances, emoji usage, and phrases that you use. I've trained just 1 epoch on a P100 GPU using QLoRA and 4-bit quantization, and it's already captured my mannerisms.

Turing Tests: As an experiment, I asked my friends to ask me 3 questions each, and I responded with 2 candidate answers (one from me and one from Llama). My friends then had to guess which response was mine and which was Llama's. Llama managed to fool 10% of my friends, but with more compute, I think it can do way better.

Here's the GitHub repository: https://github.com/Ads-cmu/WhatsApp-Llama/

Would love to hear feedback, suggestions, and any cool experiences if you decide to give it a try! I'd love to see how far we can push this by training bigger models for more epochs (I ran out of compute credits).
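For anyone curious what the training setup looks like in broad strokes, here is a hedged sketch of 4-bit QLoRA fine-tuning with Hugging Face transformers, peft, bitsandbytes, and datasets. This is not the code from the linked repo (which forks facebookresearch/llama-recipes); the model name, the toy dataset, and the hyperparameters are placeholders you would swap for your own WhatsApp export:

```python
import torch
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder; assumes access to the weights

# Load the base model in 4-bit (NF4) so it fits on a single modest GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto")
model = prepare_model_for_kbit_training(model)

# Attach low-rank adapters; only these small matrices are trained.
lora = LoraConfig(r=8, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Toy stand-in for an exported chat, formatted as prompt/response turns.
texts = ["Friend: want to grab lunch tomorrow?\nMe: yesss 1pm? usual place"]
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)
dataset = Dataset.from_dict({"text": texts}).map(
    tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="whatsapp-llama", num_train_epochs=1,
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8,
                           learning_rate=2e-4, fp16=True, logging_steps=10),
    train_dataset=dataset,
    # mlm=False gives plain causal-LM labels (shifted copies of the inputs).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```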
Show HN: I built a Python web framework
Been working on this for nearly a year.
Show HN: Find jobs at top AI startups
Hello HN, I am one of the creators of WorkInAI, and I'm excited to share our project with the community and gather valuable feedback.

WorkInAI is a job aggregation platform for positions at leading AI startups. We have compiled over 350 job listings from more than 20 top AI startups, including companies like OpenAI, Anthropic, Cohere, and more. We created this platform in response to a friend's frustration with trying to find suitable AI startup roles in London. He used to check various company career pages frequently to see if any new opportunities had arisen, so we built this to aggregate jobs in a single place.

We're launching this MVP early to gather feedback, whether it's feature requests or suggestions for adding new startups to our list. We value your thoughts and input on our product and idea.

Thanks!