The best Hacker News stories from the past day


Latest posts:

The "email is authentication" pattern

WebP: The WebPage Compression Format

Malaysia starts requiring ISPs to redirect DNS queries to local servers

Keyhole – Forge own Windows Store licenses

What happens when you touch a pickle to an AM radio tower

Swift is a more convenient Rust

Mapping 20k ships that sank during WW II

Effects of Gen AI on High Skilled Work: Experiments with Software Developers

Show HN: Infinity – Realistic AI characters that can speak

Hey HN, this is Lina, Andrew, and Sidney from Infinity AI (https://infinity.ai/). We've trained our own foundation video model focused on people. As far as we know, this is the first time someone has trained a video diffusion transformer driven by audio input. This is exciting because it allows for expressive, realistic-looking characters that actually speak. Here's a blog post with a bunch of examples: https://toinfinityai.github.io/v2-launch-page/

If you want to try it out, you can either (1) go to https://studio.infinity.ai/try-inf2, or (2) post a comment in this thread describing a character and we'll generate a video for you and reply with a link. For example: "Mona Lisa saying 'what the heck are you smiling at?'": https://bit.ly/3z8l1TM; "A 3D Pixar-style gnome with a pointy red hat reciting the Declaration of Independence": https://bit.ly/3XzpTdS; "Elon Musk singing Fly Me To The Moon by Sinatra": https://bit.ly/47jyC7C

Our tool at Infinity allows creators to type out a script with what they want their characters to say (and eventually, what they want them to do) and get a video out. We've trained for about 11 GPU-years (~$500k) so far, and our model recently started getting good results, so we wanted to share it here. We are still actively training.

We had trouble creating videos of good characters with existing AI tools. Generative AI video models (like Runway and Luma) don't allow characters to speak, and talking-avatar companies (like HeyGen and Synthesia) just do lip syncing on top of previously recorded videos. This means you often get facial expressions and gestures that don't match the audio, resulting in the "uncanny" look you can't quite put your finger on. See blog.

When we started Infinity, our V1 model took the lip-syncing approach. In addition to mismatched gestures, this method had many limitations, including a finite library of actors (we had to fine-tune a model for each one with existing video footage) and an inability to animate imaginary characters.

To address these limitations in V2, we decided to train an end-to-end video diffusion transformer model that takes in a single image, audio, and other conditioning signals and outputs video. We believe this end-to-end approach is the best way to capture the full complexity and nuance of human motion and emotion. One drawback of our approach is that the model is slow, despite using rectified flow (2-4x speedup) and a 3D VAE embedding layer (2-5x speedup).

Here are a few things the model does surprisingly well on: (1) it can handle multiple languages, (2) it has learned some physics (e.g. it generates earrings that dangle properly and infers a matching pair on the other ear), (3) it can animate diverse types of images (paintings, sculptures, etc.) despite not being trained on them, and (4) it can handle singing. See blog.

Here are some failure modes of the model: (1) it cannot handle animals (only humanoid images), (2) it often inserts hands into the frame (very annoying and distracting), (3) it's not robust on cartoons, and (4) it can distort people's identities (noticeable on well-known figures). See blog.

Try the model here: https://studio.infinity.ai/try-inf2

We'd love to hear what you think!
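To make the architecture description above concrete, here is a minimal, hypothetical sketch of what an audio-conditioned video diffusion transformer interface could look like: per-frame video latents, a reference-image latent, audio features, and a diffusion timestep are projected into a shared token space, and a transformer predicts the denoising target for the video tokens. All names, shapes, and layer choices are illustrative assumptions for this post, not Infinity's actual model.

```python
# Hypothetical sketch of an audio-driven video diffusion transformer (DiT-style).
# Shapes, names, and layer choices are illustrative assumptions only.
import torch
import torch.nn as nn


class AudioDrivenVideoDiT(nn.Module):
    def __init__(self, latent_dim=64, audio_dim=128, image_dim=64,
                 hidden_dim=512, num_layers=8, num_heads=8):
        super().__init__()
        # Per-frame video latents (e.g. from a 3D VAE) -> transformer width.
        self.video_proj = nn.Linear(latent_dim, hidden_dim)
        # Audio features (e.g. spectrogram frames) -> transformer width.
        self.audio_proj = nn.Linear(audio_dim, hidden_dim)
        # The single reference image -> one conditioning token.
        self.image_proj = nn.Linear(image_dim, hidden_dim)
        # Diffusion timestep -> one conditioning token.
        self.time_embed = nn.Sequential(
            nn.Linear(1, hidden_dim), nn.SiLU(), nn.Linear(hidden_dim, hidden_dim))
        layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=num_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=num_layers)
        # Predict the denoising target (e.g. a rectified-flow velocity) per frame.
        self.out_proj = nn.Linear(hidden_dim, latent_dim)

    def forward(self, noisy_video, audio, image, t):
        # noisy_video: (B, frames, latent_dim)   audio: (B, audio_frames, audio_dim)
        # image:       (B, image_dim)            t:     (B, 1), diffusion time in [0, 1]
        vid = self.video_proj(noisy_video)
        cond = torch.cat([
            self.time_embed(t).unsqueeze(1),      # (B, 1, hidden)
            self.image_proj(image).unsqueeze(1),  # (B, 1, hidden)
            self.audio_proj(audio),               # (B, audio_frames, hidden)
        ], dim=1)
        hidden = self.transformer(torch.cat([cond, vid], dim=1))
        # Keep only the video positions and map back to latent space.
        return self.out_proj(hidden[:, -vid.shape[1]:, :])


# Usage: one denoising step on random tensors.
model = AudioDrivenVideoDiT()
pred = model(noisy_video=torch.randn(2, 16, 64),
             audio=torch.randn(2, 40, 128),
             image=torch.randn(2, 64),
             t=torch.rand(2, 1))
print(pred.shape)  # torch.Size([2, 16, 64])
```

In a rectified-flow setup like the one the post mentions, the output would be interpreted as a velocity field and integrated over a small number of steps at inference time, which is where the quoted 2-4x speedup over standard diffusion sampling would come from.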

Did Sandia use a thermonuclear secondary in a product logo?

2M users but no money in the bank

Tell HN: Burnout is bad for your brain, take care

I have been depressed and burned out for quite some time, and unfortunately my brain still hasn't recovered.

To summarize the impact of burnout on my brain:

- Before: I could learn things quickly, come up with solutions to problems, and even spot common patterns and the bigger underlying problems.

- After: I can't learn, can't work, can't remember, and can't see solutions to trivial problems (e.g. if my shirt is wet, I could just change it, but instead I stare at it wondering when it will dry).

Take care of your mental health.

Show HN: Wealthfolio: Private, open-source investment tracker

Thank you for your comments; some context:

- The app is a simple desktop application that works on macOS, Windows, and Ubuntu.

- I developed it for my own needs, having grown tired of SaaS subscriptions and privacy concerns.

- For now, activities are logged manually or imported from a CSV file. There is no integration with Plaid or other platforms.

- No monetization is planned for now (only a "buy me a coffee" link if you use and appreciate the app).
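For readers curious about the manual/CSV workflow mentioned above, a plain activities file could be parsed with nothing but the standard library. The column names below (date, symbol, activity_type, quantity, unit_price) are hypothetical and only illustrate the idea; they are not Wealthfolio's actual import schema.

```python
# Hypothetical sketch of importing investment activities from a CSV file.
# Column names are illustrative, not Wealthfolio's actual import format.
import csv
from dataclasses import dataclass
from datetime import date


@dataclass
class Activity:
    day: date
    symbol: str
    activity_type: str  # e.g. "BUY", "SELL", "DIVIDEND"
    quantity: float
    unit_price: float


def load_activities(path: str) -> list[Activity]:
    # Read rows by header name so column order in the file doesn't matter.
    with open(path, newline="") as f:
        return [
            Activity(
                day=date.fromisoformat(row["date"]),
                symbol=row["symbol"],
                activity_type=row["activity_type"].upper(),
                quantity=float(row["quantity"]),
                unit_price=float(row["unit_price"]),
            )
            for row in csv.DictReader(f)
        ]


# Example usage: total cost basis of all BUY rows.
# activities = load_activities("activities.csv")
# cost = sum(a.quantity * a.unit_price for a in activities if a.activity_type == "BUY")
```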

Common food dye found to make skin and muscle temporarily transparent
