The best Hacker News stories from Show HN from the past day

Latest posts:

Show HN: I am building an open-source Confluence and Notion alternative

Hello HN,

I am building Docmost, an open-source collaborative wiki and documentation tool. It is an alternative to Confluence and Notion.

I have been working on it for the past 12 months. This is the first public release (beta).

The rich-text editor supports real-time collaboration, LaTeX, inline comments, tables, and callouts, to name a few.

Features:

- Collaborative real-time editor
- Spaces (Teamspace)
- User permissions
- Groups
- Comments
- Page history
- Nested pages
- Search
- File attachments

You can find screenshots of the product on the website.

Website: https://docmost.com
GitHub: https://github.com/docmost/docmost
Documentation: https://docmost.com/docs

I would love to hear your feedback.

Thank you.

Show HN: Code to run Gemini (Nano) locally on desktop/Chrome

Chrome Canary (the nightly build for developers) now has the Gemini Nano LLM built in. This is just some simple code and a demo to enable and use that feature.

Show HN: Semantic Search of 1000 Top Movies of All Time

Show HN: Focal, a Pomodoro App

Focal is a pomodoro web app. I know there are a lot of productivity apps already, but I'm releasing this since I figure someone might find it useful. Focal was more of a short, 3-day project, so it may be rough around the edges, but I think it does the job. In any case, Focal is open source and I would be happy to merge contributions. Here's the GitHub repository: https://github.com/aabiji/focal

Have a nice day!

Six Degrees of Reform UK – Mapping the UK's Entrepreneurial Far Right

Show HN: Gosax – A high-performance SAX XML parser for Go

I've just released gosax, a new Go library for high-performance SAX (Simple API for XML) parsing. It's designed for efficient, memory-conscious XML processing, drawing inspiration from quick-xml and pkg/json. https://github.com/orisano/gosax

Key features:

- Read-only SAX parsing
- Highly efficient parsing using techniques inspired by quick-xml and pkg/json
- SWAR (SIMD Within A Register) optimizations for fast text processing

gosax is particularly useful for processing large XML files or streams without loading the entire document into memory. It's well-suited for data feeds, large configuration files, or any scenario where XML parsing speed is crucial. I'd appreciate any feedback, especially from those working with large-scale XML processing in Go. What are your current pain points with XML parsing? How could gosax potentially help your projects?
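For anyone new to streaming (SAX-style) XML parsing in Go, here is a minimal sketch of the idea using only the standard library's encoding/xml decoder. It is not gosax's own API (see the repository README for that), and the file name and the item element are made up for illustration. The point is simply that events are pulled one at a time, so the whole document is never held in memory, which is the same property gosax provides at higher speed.

    package main

    import (
    	"encoding/xml"
    	"fmt"
    	"os"
    )

    func main() {
    	f, err := os.Open("feed.xml") // hypothetical large XML file
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := xml.NewDecoder(f)
    	count := 0
    	for {
    		tok, err := dec.Token() // one event at a time; constant memory
    		if err != nil {
    			break // io.EOF or a parse error
    		}
    		if se, ok := tok.(xml.StartElement); ok && se.Name.Local == "item" {
    			count++
    		}
    	}
    	fmt.Println("items:", count)
    }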

Show HN: A Modern Palletization App

When searching the internet for this type of app, I didn't find many that were open source AND easy to use. A lot of them had complicated interfaces, although they also had loads of features.

So what I had in mind when making Stack Solver was an app with a modern interface that has the most essential features. It is also well integrated with Microsoft Excel and renders a customizable 3D drawing.

Stack Solver is written in C# using the WPF framework to keep it fast and light. The interface is built with WPF UI, a library that allows it to keep up with modern trends (specifically the Fluent design).

It is a work in progress with tons of new features planned, and it is my first "serious" project, so I would appreciate any feedback :)

Show HN: Dorkly – Open source feature flags

Dorkly is a free, open-source feature-flag backend for LaunchDarkly SDKs. It uses simple YAML files stored in GitHub as the source of truth.

Full disclosure: made by a former LaunchDarkly employee and current fan.

Show HN: Voice bots with 500ms response times

Last year when GPT-4 was released I started making lots of little voice + LLM experiments. Voice interfaces are fun; there are several interesting new problem spaces to explore.

I'm convinced that voice is going to be a bigger and bigger part of how we all interact with generative AI. But one thing that's hard, today, is building voice bots that respond as quickly as humans do in conversation. A 500ms voice-to-voice response time is just *barely* possible with today's AI models.

You can get down to 500ms if you: host transcription, LLM inference, and voice generation all together in one place; are careful about how you route and pipeline all the data; and the gods of both Wi-Fi and VRAM caching smile on you.

Here's a demo of a 500ms-capable voice bot, plus a container you can deploy to run it yourself on an A10/A100/H100 if you want to: https://fastvoiceagent.cerebrium.ai/

We've been collecting lots of metrics. Here are typical numbers (in milliseconds) for all the easily measurable parts of the voice-to-voice response cycle.

    macOS mic input                    40
    Opus encoding                      30
    network stack and transit          10
    packet handling                     2
    jitter buffer                      40
    Opus decoding                      30
    transcription and endpointing     200
    LLM TTFB (time to first byte)     100
    sentence aggregation              100
    TTS TTFB                           80
    Opus encoding                      30
    packet handling                     2
    network stack and transit          10
    jitter buffer                      40
    Opus decoding                      30
    macOS speaker output               15
    ----------------------------------
    total ms                          759

Everything in AI is changing all the time. LLMs with native audio input and output capabilities will likely make it easier to build fast-responding voice bots soon. But for the moment, I think this is the fastest possible approach/tech stack.
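Purely as a sanity check on the arithmetic above, here is a small Go sketch (mine, not from the demo's codebase) that tabulates the listed stages and sums them. The names and millisecond values are copied verbatim from the table; it shows the typical path landing at 759 ms, roughly 259 ms over the 500 ms target that careful colocation and pipelining are meant to close.

    package main

    import "fmt"

    func main() {
    	// Stage latencies (ms) copied from the post's typical voice-to-voice breakdown.
    	stages := []struct {
    		name string
    		ms   int
    	}{
    		{"macOS mic input", 40},
    		{"Opus encoding", 30},
    		{"network stack and transit", 10},
    		{"packet handling", 2},
    		{"jitter buffer", 40},
    		{"Opus decoding", 30},
    		{"transcription and endpointing", 200},
    		{"LLM time to first byte", 100},
    		{"sentence aggregation", 100},
    		{"TTS time to first byte", 80},
    		{"Opus encoding", 30},
    		{"packet handling", 2},
    		{"network stack and transit", 10},
    		{"jitter buffer", 40},
    		{"Opus decoding", 30},
    		{"macOS speaker output", 15},
    	}

    	total := 0
    	for _, s := range stages {
    		fmt.Printf("%-32s %4d ms\n", s.name, s.ms)
    		total += s.ms
    	}
    	fmt.Printf("%-32s %4d ms\n", "total", total) // prints 759 ms
    	fmt.Printf("over the 500 ms target by %d ms\n", total-500)
    }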
