The best Hacker News stories from the past day


Latest posts:

Ask HN: Who is hiring? (November 2023)

Please state the location and include REMOTE, INTERNS and/or VISA when that sort of candidate is welcome. When remote work is not an option, include ONSITE.

Please only post if you personally are part of the hiring company — no recruiting firms or job boards. One post per company. If it isn't a household name, explain what your company does.

Commenters: please don't reply to job posts to complain about something. It's off topic here.

Readers: please only email if you are personally interested in the job.

Searchers: try https://www.remotenbs.com, https://hnjobs.u-turn.dev, https://hnresumetojobs.com, https://hnhired.fly.dev, https://kennytilton.github.io/whoishiring/, https://hnjobs.emilburzo.com.

Don't miss these other fine threads:

Who wants to be hired? https://news.ycombinator.com/item?id=38099084

Freelancer? Seeking freelancer? https://news.ycombinator.com/item?id=38099085

Improving deep sleep may prevent dementia, study finds

My rude-ass car

Cosmopolitan Third Edition

Copying Angry Birds with nothing but AI

macOS Sonoma Boot Failures

Firefox got faster for real users in 2023

Phind Model beats GPT-4 at coding, with GPT-3.5 speed and 16k context

Hi HN,

We're excited to announce that Phind now defaults to our own model that matches and exceeds GPT-4's coding abilities while running 5x faster. You can now get high-quality answers to technical questions in 10 seconds instead of 50.

The current 7th-generation Phind Model is built on top of our open-source CodeLlama-34B fine-tunes, which were the first models to beat GPT-4's score on HumanEval and are still the best open-source coding models overall by a wide margin: https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard.

This new model has been fine-tuned on an additional 70B+ tokens of high-quality code and reasoning problems and achieves a HumanEval score of 74.7%. However, we've found that HumanEval is a poor indicator of real-world helpfulness. After deploying previous iterations of the Phind Model on our service, we've collected detailed feedback and noticed that our model matches or exceeds GPT-4's helpfulness most of the time on real-world questions. Many in our Discord community have begun using Phind exclusively with the Phind Model despite also having unlimited access to GPT-4.

One of the Phind Model's key advantages is that it's very fast. We've achieved a 5x speedup over GPT-4 by running our model on H100s using the new TensorRT-LLM library from NVIDIA: up to 100 tokens per second single-stream, while GPT-4 runs at around 20 tokens per second at best.

Another key advantage of the Phind Model is context: it supports up to 16k tokens. We currently allow inputs of up to 12k tokens on the website and reserve the remaining 4k for web results.

There are still some rough edges with the Phind Model, and we'll continue improving it constantly. One area where it still suffers is consistency: on certain challenging questions where it is capable of getting the right answer, the Phind Model might take more generations than GPT-4 to reach it.

We'd love to hear your feedback.

Cheers,

The Phind Team


Home schooling's rise from fringe to fastest-growing form of education

German court prohibits LinkedIn from ignoring "Do Not Track" signals

Apple unveils M3, M3 Pro, and M3 Max

AI.gov

Global CO2 Levels

The Grug Brained Developer (2022)
