The best Show HN stories from Hacker News from the past day

Latest posts:

Show HN: Web search using a ChatGPT-like model that can cite its sources

We’ve trained a generative AI model to browse the web and answer questions or retrieve code snippets directly. Unlike ChatGPT, it has access to primary sources and can cite them when you hover over an answer (click the text to go to the source being cited). We also show regular Bing results side-by-side with our AI answer.

The model is an 11-billion-parameter T5 derivative that has been fine-tuned on feedback from hundreds of thousands of searches done (anonymously) on our platform. Giving the model web access reduces its need to store a snapshot of human knowledge within its parameters; instead, it learns how to piece together primary sources in a natural and informative way. Using our own model is also an order of magnitude cheaper than relying on GPT.

A drawback of aligning models to web results is that they are less inclined to generate complete solutions or answers to questions where good primary sources don’t exist. Answers generated without underlying citable sources can be more creative but are prone to errors. In the future, we will show both types of answers.

Examples:

https://beta.sayhello.so/search?q=set+cookie+in+fastapi
https://beta.sayhello.so/search?q=What+did+Paul+Graham+learn+from+users
https://beta.sayhello.so/search?q=How+to+get+command+line+parameters+in+Rust
https://beta.sayhello.so/search?q=why+did+Elon+Musk+buy+twitter

Would love to hear your thoughts.
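
The post doesn't include implementation details, but the hover-to-cite behavior can be pictured as retrieval-augmented generation that keeps a source URL attached to every generated span of the answer. Here is a minimal, hypothetical sketch of that idea in Python (illustrative only, not the authors' code; `search` and `summarize` stand in for the Bing retrieval and the fine-tuned T5 model):

    # Hypothetical sketch: retrieval-augmented answering with citations.
    # `search` and `summarize` are stand-ins, not sayhello.so internals.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Source:
        url: str
        text: str

    @dataclass
    class Span:
        text: str
        source_url: str  # shown on hover; opened on click

    def answer(query: str,
               search: Callable[[str], List[Source]],
               summarize: Callable[[str, str], str]) -> List[Span]:
        """Compose an answer one primary source at a time, so every
        span of generated text can point back at the page it came from."""
        spans = []
        for doc in search(query):
            sentence = summarize(query, doc.text)  # model conditioned on the source
            if sentence:
                spans.append(Span(sentence, doc.url))
        return spans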

Show HN: Tenebra, a PC port of the popular Commodore 64 game

Guide the hapless protagonist to the exit, while keeping in mind that he is afraid of darkness and refuses to walk in the dark areas.

Show HN: TromPhone, a Trombone for Your Phone

A few months ago I had a silly idea of making a mobile app that used the accelerometer to track the slide motion for playing a virtual trombone. Just wanted to share the story of bringing it to fruition here on HN.

I started out spending a couple of days trying to get something cross-platform going in Flutter, but it soon became clear that wasn't the best fit, seeing as I'd need native hooks for most of what the app needed to do, and it wasn't yet clear it'd be possible at all. So I switched to making it an iOS app in Swift. The accelerometer data turned out to be nowhere near accurate enough to do the job, so I switched to using the camera/AR via ARKit... and it worked instantly, the very first time I hooked a slider UI element up to the distance function. It felt a bit like magic. And also just ridiculous.

Here's a video I recorded to send to some friends at the time: https://youtu.be/6BIogfGH3IQ

Here's a video of it in action in its current state: https://youtube.com/shorts/8kS2TRzV4I4?feature=share

Apologies for the non-responsive website (using Next.js on Cloudflare). It doesn't look great on mobile, which is kind of inexcusable; I'm working on it: https://www.tromphoneapp.com

Anyway, I'm not sure where I'll take it from here. I have some ideas for more AR content like hats, heart eyes, etc. Possibly a song editor so users can add songs that might have copyright issues if I included them in the app? Any ideas you guys have would be fun to hear.

Show HN: Publish from GitHub Actions using multi-factor authentication

The backstory about this GitHub Action:

I discussed with an open-source maintainer why they publish npm packages from their local machine and do not use CI/CD pipelines.

They said publishing should require human intervention, and they want to continue using multi-factor authentication to publish to the npm registry.

This led to building the wait-for-secrets GitHub Action. It prints a URL in the build log and waits for secrets to be entered using a browser. Once entered, the workflow continues, and the secrets can be used in later steps.

The latest release of "eslint-plugin-react" to the npm registry used a one-time password (OTP) from a GitHub Actions workflow! https://github.com/jsx-eslint/eslint-plugin-react/actions/runs/3498968497/jobs/5860126389#step:9:1
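
Mechanically, the idea can be pictured as a print-then-poll loop: the job prints a one-time URL, a human opens it in a browser and submits the OTP, and the job blocks until the secret arrives. A rough Python sketch of that pattern (the broker service and its endpoints are hypothetical, not how the real action is implemented):

    # Illustrative print-URL-then-poll pattern; the actual wait-for-secrets
    # action works differently. The broker service here is hypothetical.
    import time
    import uuid
    import urllib.request

    BROKER = "https://secrets.example.com"  # hypothetical broker service
    session = uuid.uuid4().hex

    print(f"Enter the OTP at: {BROKER}/enter/{session}")

    otp = ""
    while not otp:
        time.sleep(5)  # block until a human submits the secret in a browser
        with urllib.request.urlopen(f"{BROKER}/poll/{session}") as resp:
            otp = resp.read().decode().strip()

    # Later steps can now use the secret, e.g. `npm publish --otp <otp>`.
    print("OTP received; continuing the workflow.")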

Show HN: Port of OpenAI's Whisper model in C/C++

Hi HN,

OpenAI recently released a model for automatic speech recognition called Whisper [0]. I decided to reimplement the inference of the model from scratch in C/C++. To achieve this, I implemented a minimalistic tensor library in C and ported the high-level architecture of the model to C++. The entire code is less than 8000 lines and is contained in just two source files, without any third-party dependencies. The GitHub project is here:

https://github.com/ggerganov/whisper.cpp

With this implementation I can very easily build and run the model: "make base.en". It also allows me to run it on a wide range of devices. For example, I have provided examples of running the model on an iPhone, a Raspberry Pi 4, and even in a web page via WebAssembly!

The implementation runs fully on the CPU and utilizes FP16, AVX intrinsics on x86 architectures, and NEON plus the Accelerate framework on Apple Silicon. The latter is especially efficient: I observe that inference is about 2-3 times faster than the current PyTorch implementation provided by OpenAI when running on my MacBook M1 Pro. The WASM port utilizes 128-bit SIMD intrinsics, a feature supported in some modern web browsers [1].

I am very happy with the performance I observe on Apple Silicon devices. I didn't expect the Accelerate framework [2] (i.e. CBLAS) to offer such a dramatic performance boost for matrix multiplications, so I was very pleasantly surprised! To enable the framework in your C/C++ projects, all you have to do is add `-framework Accelerate` to your clang command-line flags.

This entire exercise of implementing the Whisper model was very interesting to me and helped me understand a lot about how the transformer architecture works. I also got a lot of positive feedback from people finding and using my project. We brainstormed a lot of interesting tools that could be built with this library (such as a speech-to-text plugin for Vim, an RPi4 voice assistant, a WASM chat bot, etc.). If interested, check out the "Examples" section and the "Show and tell" discussions for some ideas!

Would love to know what you think about this project and about your experience with using the Accelerate framework in any of your projects. Cheers!

[0] https://github.com/openai/whisper
[1] https://chromestatus.com/feature/6533147810332672
[2] https://developer.apple.com/documentation/accelerate
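
For readers who want to see the Accelerate speedup without setting up a C project: on macOS the same CBLAS entry points can be called from Python via ctypes. This small demo is not part of whisper.cpp; it just exercises the cblas_sgemm routine that the Accelerate framework provides:

    # macOS-only demo: call Accelerate's cblas_sgemm via ctypes.
    # Not part of whisper.cpp; just shows the CBLAS interface it leans on.
    import ctypes
    import ctypes.util

    acc = ctypes.cdll.LoadLibrary(ctypes.util.find_library("Accelerate"))

    fp = ctypes.POINTER(ctypes.c_float)
    acc.cblas_sgemm.restype = None
    acc.cblas_sgemm.argtypes = (
        [ctypes.c_int] * 6                                    # order, transposes, M, N, K
        + [ctypes.c_float, fp, ctypes.c_int, fp, ctypes.c_int]  # alpha, A, lda, B, ldb
        + [ctypes.c_float, fp, ctypes.c_int]                  # beta, C, ldc
    )

    ROW_MAJOR, NO_TRANS = 101, 111  # CBLAS enum values
    M = N = K = 2
    A = (ctypes.c_float * 4)(1, 2, 3, 4)  # [[1, 2], [3, 4]]
    B = (ctypes.c_float * 4)(5, 6, 7, 8)  # [[5, 6], [7, 8]]
    C = (ctypes.c_float * 4)()

    # C = 1.0 * A @ B + 0.0 * C
    acc.cblas_sgemm(ROW_MAJOR, NO_TRANS, NO_TRANS, M, N, K,
                    1.0, A, K, B, K, 0.0, C, N)
    print(list(C))  # [19.0, 22.0, 43.0, 50.0]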

Show HN: Codeium – a free, fast AI codegen extension

I'm Varun, CEO of Exafunction, and we just released Codeium to open up access to generative AI for all developers, for free. In the spirit of Show HN, we created a playground version so anyone can try this tech in the browser (click "Try in Browser")!

We have built scalable, low-latency ML infra for many top AI companies in the past, and we are excited to turn that tech into a product that we, as developers, would love. We hope that you do too, and we would appreciate any feedback that this community has for us!

Show HN: TinyUX – Grid-based low-fi wireframing on your mobile phone

You tap icons to create a wireframe, or do some visual brainstorming. Then export the image to share on Slack, GitLab, etc.

I wanted to work on a neural net that interprets an imported image of a wireframe, so it can then be manipulated inside an app. Figured it would be best to first build the wireframing app. So I created TinyUX.

It was released as quickly as possible, which influenced some choices:

- While created in React Native, it's Android only.
- It's paid only (~$5). While freemium might make more sense, this was quicker to release, since in-app purchases in RN are not trivial. It's the first app I've created that's not free, so that's an experiment too.
- There are no online features; everything is stored on the device.

Looking to validate with UX designers, but all feedback is welcome.

Show HN: Domain Name Search with AI

In my exploration of OpenAI, I just created a domain-name search that takes a business description as input and generates interesting domain names for it. It then uses the DNSimple API to check whether the .com is available.

In my view it is a much easier way to find a suitable domain, as the AI considers a much larger pool of possible names than my own brain. SmartyNames found its own name using the tool itself.

Hope you enjoy it! https://smartynames.com/
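
The two-step pipeline (generate candidate names with the model, then check availability) is easy to sketch. The rough Python version below is not SmartyNames' code: the prompt is invented, it uses the 2022-era OpenAI completion API, and a plain DNS lookup stands in for the DNSimple availability check the author actually uses:

    # Rough sketch of "business description -> available .com ideas".
    # Not SmartyNames' code: the prompt is invented, and a DNS lookup
    # stands in for the DNSimple registrar API.
    import socket
    import openai  # 0.x-era API: pip install openai

    def candidate_names(description: str) -> list[str]:
        resp = openai.Completion.create(
            model="text-davinci-003",
            prompt="Suggest ten short, brandable domain names (one per "
                   f"line, no TLD) for this business: {description}",
            max_tokens=100,
        )
        return [line.strip("-0123456789. ").lower()
                for line in resp.choices[0].text.splitlines()
                if line.strip("-0123456789. ")]

    def probably_available(name: str) -> bool:
        """Treat NXDOMAIN as 'probably available'. A real product should
        ask a registrar API, since unresolvable domains can still be taken."""
        try:
            socket.gethostbyname(f"{name}.com")
            return False
        except socket.gaierror:
            return True

    for name in candidate_names("a tool that finds domain names with AI"):
        if probably_available(name):
            print(f"{name}.com looks available")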

Tell HN: My child's first program

Last night, I introduced my kid to programming. We'd done some stuff with Mindstorms before, but she never really caught the bug for it. But for some reason, last night when I showed her some simple Python scripting to solve math problems and write to the console, she was enthralled.

After guiding her through a few things, she took the laptop off for a while and then came back with her first program, giggling like a maniac:

    you='WOW!!!'
    fart='So many poops!'
    print(you,fart)

I'm pretty proud :D

Show HN: Controversial quiz game generated by ChatGPT

Show HN: GPT-3-powered service that helps you send more humane emails

Show HN: I recreated Coursera with 150 free YouTube tutorials
