The best Hacker News stories from Show HN from the past day

Latest posts:

Show HN: Stop AI scrapers from hammering your self-hosted blog (using porn)

Alright, so if you run a self-hosted blog, you've probably noticed AI companies scraping it for training data. And not just a little (RIP to your server bill).

There isn't much you can do about it without Cloudflare. These companies ignore robots.txt, and you're competing with teams that have more resources than you. It's you vs. the MJs of programming; you're not going to win.

But there is a solution. Now, I'm not going to say it's a great solution... but a solution is a solution. If your website contains content that will trigger their scrapers' safeguards, it will get dropped from their data pipelines.

So here's what fuzzycanary does: it injects hundreds of invisible links to porn websites into your HTML. The links are hidden from users but present in the DOM, so scrapers ingest them and say "nope, we won't scrape there again in the future."

The problem with that approach is that it will absolutely nuke your website's SEO. So fuzzycanary also checks user agents and won't show the links to legitimate search engines, so Google and Bing won't see them.

One caveat: if you're using a static site generator, it will bake the links into your HTML for everyone, including Googlebot. Does anyone have a workaround for this that doesn't involve using a proxy?

Please try it out! Setup is one component or one import.

(And don't tell me it's a terrible idea, because I already know it is.)

package: https://www.npmjs.com/package/@fuzzycanary/core
gh: https://github.com/vivienhenz24/fuzzy-canary
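
To illustrate the core trick, here is a minimal sketch of a user-agent gate plus hidden link injection. This is a hypothetical illustration, not fuzzycanary's actual API; the bot list, decoy domains, and helper names are assumptions.

```typescript
// Hypothetical sketch of the technique: serve decoy links to unknown
// crawlers, but never to real users (CSS-hidden) or known search engines.

const SEARCH_ENGINE_BOTS = [/googlebot/i, /bingbot/i, /duckduckbot/i];

// Decoy URLs would come from a curated list; placeholder domains here.
const DECOY_URLS = Array.from(
  { length: 200 },
  (_, i) => `https://adult-example-${i}.example.com/`
);

function isSearchEngine(userAgent: string): boolean {
  return SEARCH_ENGINE_BOTS.some((re) => re.test(userAgent));
}

// Returns an HTML fragment to append to the page body, or "" for
// legitimate search engines so SEO is unaffected.
function decoyLinksHtml(userAgent: string): string {
  if (isSearchEngine(userAgent)) return "";
  const anchors = DECOY_URLS.map(
    (url) => `<a href="${url}" rel="nofollow" tabindex="-1">.</a>`
  ).join("");
  // Hidden from humans, but present in the DOM for scrapers to ingest.
  return `<div aria-hidden="true" style="position:absolute;left:-9999px">${anchors}</div>`;
}
```

Because the user-agent check happens per request, this only works when the HTML is rendered server-side, which is exactly why the static-site-generator caveat above exists.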

Show HN: GitForms – Zero-cost contact forms using GitHub Issues as database

I got tired of paying $29–99/month for simple contact forms on landing pages and side projects (Typeform, Tally, etc.). So I built GitForms: an open-source contact form that stores submissions as GitHub Issues.

How it works:
- The form runs on your Next.js 14 site (Tailwind + TypeScript)
- On submit → creates a new Issue in your repo via the GitHub API
- You get instant email notifications from GitHub (free)

Zero ongoing costs:
- No database, no backend servers
- Deploy on the Vercel/Netlify free tier in minutes
- Configurable via JSON (themes, text, multi-language)

Perfect for MVPs, landing pages, portfolios, or any low-volume use case.

Repo: https://github.com/Luigigreco/gitforms
License: CC-BY-NC-SA-4.0 (non-commercial only – fine for personal projects, not client work).

Curious what HN thinks: would you use this? Any obvious improvements or edge cases I missed? Thanks!
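
For reference, the GitHub side of this pattern is a single REST call. Below is a minimal sketch of turning a form submission into an issue via the public GitHub REST API; the repo names, env variable, and field mapping are assumptions, not GitForms' actual code.

```typescript
// Minimal sketch: turn a contact-form submission into a GitHub Issue.
// POST /repos/{owner}/{repo}/issues is the standard GitHub REST endpoint.

interface ContactSubmission {
  name: string;
  email: string;
  message: string;
}

async function createIssueFromForm(sub: ContactSubmission): Promise<number> {
  const owner = "your-user";               // assumption: configured elsewhere
  const repo = "your-contact-inbox";       // assumption: a private repo
  const token = process.env.GITHUB_TOKEN!; // needs "issues: write" permission

  const res = await fetch(`https://api.github.com/repos/${owner}/${repo}/issues`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      Accept: "application/vnd.github+json",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      title: `Contact form: ${sub.name}`,
      body: `From: ${sub.name} <${sub.email}>\n\n${sub.message}`,
      labels: ["contact-form"],
    }),
  });

  if (!res.ok) throw new Error(`GitHub API error: ${res.status}`);
  const issue = await res.json();
  return issue.number; // GitHub then emails repo watchers about the new issue
}
```

One thing to keep in mind with this approach: the token has to live server-side (an API route or serverless function), never in client-side form code.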

Show HN: I built a WebMIDI sequencer to control my hardware synths

Hey HN,

I’m an ex-Google engineer trying to get back into music production.

I needed a way to sequence my hardware synths using AI contexts without constantly switching windows, so I built this.

It runs entirely in the browser using WebMIDI. No login required. It connects to your local MIDI devices (if you're on Chrome/Edge) and lets you generate patterns.

Tech stack: [React / WebMIDI API / etc].

Link: www.simplychris.ai/droplets

Code is a bit messy, but it works. Feedback welcome.
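
For anyone who hasn't used the Web MIDI API, the browser-to-synth path is quite short. A minimal sketch (not this project's code; depending on your TypeScript version you may need extra type definitions for the MIDI interfaces):

```typescript
// Minimal Web MIDI sketch: list outputs and play middle C on the first one.
// Works in Chromium-based browsers; requestMIDIAccess prompts for permission.

async function playMiddleC(): Promise<void> {
  const access = await navigator.requestMIDIAccess();

  const output = [...access.outputs.values()][0];
  if (!output) {
    console.log("No MIDI outputs found");
    return;
  }

  console.log(`Sending to: ${output.name}`);
  output.send([0x90, 60, 100]);                        // note on, C4, velocity 100
  output.send([0x80, 60, 0], performance.now() + 500); // note off 500 ms later
}

playMiddleC().catch(console.error);
```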

Show HN: Titan – JavaScript-first framework that compiles into a Rust server

Hi HN,

I built Titan, a backend framework where you write routes and logic in JavaScript, and the CLI compiles everything into a single Rust + Axum binary using the Boa JS engine. No Node.js is required in production.

The idea is to keep the JS developer experience while getting Rust performance and a self-contained deployable server.

Current features:
- JS route DSL
- Action system mapped to Rust
- esbuild bundling
- Generated Rust server with Axum
- Hot-reload dev server
- Single-binary output

Repo: https://github.com/ezet-galaxy/-ezetgalaxy-titan

Would love feedback on the architecture, DX, and whether this hybrid JS→Rust approach is useful.

Thanks for reading!

Show HN: High-Performance Wavelet Matrix for Python, Implemented in Rust

I built a Rust-powered Wavelet Matrix library for Python.

There were surprisingly few practical Wavelet Matrix implementations available for Python, so I implemented one with a focus on performance, usability, and typed APIs. It supports fast rank/select, top-k, quantile, range queries, and even dynamic updates.

Feedback welcome!
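
For readers unfamiliar with those query types, here is what they mean on a plain array; a wavelet matrix answers the same questions in roughly O(log σ) time for an alphabet of size σ instead of scanning. These naive functions are for illustration only, not the library's API.

```typescript
// Naive reference semantics for the queries a wavelet matrix accelerates.

const seq = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5];

// rank(c, i): number of occurrences of value c in seq[0..i)
function rank(c: number, i: number): number {
  return seq.slice(0, i).filter((x) => x === c).length;
}

// select(c, k): index of the k-th (1-based) occurrence of c, or -1
function select(c: number, k: number): number {
  let seen = 0;
  for (let i = 0; i < seq.length; i++) {
    if (seq[i] === c && ++seen === k) return i;
  }
  return -1;
}

// quantile(l, r, k): k-th smallest value (1-based) in seq[l..r)
function quantile(l: number, r: number, k: number): number {
  return [...seq.slice(l, r)].sort((a, b) => a - b)[k - 1];
}

console.log(rank(5, 9));        // 2  -> two 5s appear before index 9
console.log(select(5, 3));      // 10 -> the third 5 sits at index 10
console.log(quantile(0, 7, 4)); // 3  -> 4th smallest of [3,1,4,1,5,9,2]
```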

Show HN: I built a fast RSS reader in Zig

Well, I certainly tried. I had to, because it has a certain quirk inspired by "digital minimalism."

The quirk is that it only allows you to fetch new articles once per day (or once every X days).

Why? Let me explain...

I want my internet content to be like a boring newspaper. You get it in the morning, you read the whole thing while sipping your morning coffee, and then you're done! No more new information for today. No pings, no alerts, peace, quiet, zen, etc.

But with that, I needed it to be able to fetch all articles from my hundreds of feeds in one sitting. This is where Zig and curl optimisations come in. I tried to do all the tricks in the book. If I missed something, let me know!

First off, I'm using curl multi for the network layer. The cool thing is it automatically does HTTP/2 multiplexing, which means if your feeds are hosted on the same CDN it reuses the same connection. I've got it configured to handle 50 connections total with up to 6 per host, which seems to be the sweet spot before servers start getting suspicious. Also, conditional GETs: if a feed hasn't changed since last time, the server just says "Not Modified" and we bail immediately.

While curl is downloading feeds, I wouldn't want the CPU sitting idle, so the moment curl finishes downloading a single feed, it fires a callback that immediately throws the XML into a worker thread pool for parsing. The main thread keeps managing all the network stuff while worker threads chew through XML in parallel. Zig's memory model is perfect for this. Each feed gets its own ArenaAllocator, which is basically a playground where you can allocate strings during parsing, and when we're done, we just nuke the entire arena in one go.

For parsing itself, I'm using libexpat because it doesn't load the entire XML into memory like a DOM parser would. This matters because some feeds, podcast feeds especially, are 10 MB+ of XML. So with smart truncation we download only the first X MB (configurable), scan backwards to find the last complete item tag, cut it there, and parse just that. Keeps memory usage sane even when feed sizes get massive.

And for the UI I just pipe everything to the system's "less" command. You get vim navigation, searching, and paging for free. Plus I'm using OSC 8 hyperlinks, so you can actually click links to open them in your browser. Zero TUI framework needed. I've also included OPML import/export and feed groups as additional features.

The result: content from hundreds of RSS feeds retrieved in a matter of seconds, and peace of mind for the rest of the day.

The code is open source and MIT licensed. If you have ideas on how to make it even faster or better, comment below. Feature requests and other suggestions are also welcome, here or on GitHub.
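
The conditional GET part is easy to replicate in any stack. The reader itself is Zig + libcurl; as a language-neutral illustration, here is a sketch of the same ETag / Last-Modified dance in TypeScript with fetch (the cache shape is an assumption):

```typescript
// Conditional GET sketch: only re-download a feed when the server says it changed.
// The server replies 304 Not Modified when the cached validators still match.

interface FeedCacheEntry {
  etag?: string;
  lastModified?: string;
  body?: string;
}

const cache = new Map<string, FeedCacheEntry>();

async function fetchFeed(url: string): Promise<string | undefined> {
  const cached = cache.get(url) ?? {};
  const headers: Record<string, string> = {};
  if (cached.etag) headers["If-None-Match"] = cached.etag;
  if (cached.lastModified) headers["If-Modified-Since"] = cached.lastModified;

  const res = await fetch(url, { headers });

  if (res.status === 304) {
    // Nothing new; bail immediately and reuse what we already have.
    return cached.body;
  }

  const body = await res.text();
  cache.set(url, {
    etag: res.headers.get("ETag") ?? undefined,
    lastModified: res.headers.get("Last-Modified") ?? undefined,
    body,
  });
  return body;
}
```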

Show HN: Learn Japanese contextually while browsing

Hi HN,

Just wanted to share a tool I've been working on to help with my own study routine. It’s a browser extension called Lingoku.

The idea is simple: we spend hours browsing the web in English every day. This tool replaces some of the English words with Japanese vocabulary based on your Japanese level (similar to Toucan, but with a better user experience).

It’s basically an attempt to make the "i+1" method actually passive: you understand the sentence because it's mostly English, but you pick up Japanese words naturally from the context. It uses an LLM in the backend to make sure the translations fit the context (so it distinguishes between different meanings of the same word).

Since it uses paid AI APIs for the word replacement, I couldn't make it 100% free (server costs are real, unfortunately). However, there is a "forever free" plan with daily credits that doesn't require a credit card. It should be enough for casual daily browsing.

I built this because I struggle with Anki burnout and wanted a way to review words without feeling like I am "studying."

It supports Chrome, Edge, and Firefox now. Would love any feedback or feature requests!

https://lingoku.ai/learn-japanese

Show HN: My Tizen multiplayer drawing game flopped, but then hit 100M drawings

Hi HN,

I built the first version of Drawize back in late 2016, specifically for a Samsung Tizen OS app contest. I crunched and built the whole thing (including the real-time multiplayer engine) in under 4 weeks.

It didn’t win anything in the contest.

Since it was built with web tech anyway, I published it on the open web in early 2017 just to see what would happen. It started living its own life, and today, 8 years later, the database processed its 100,000,000th drawing.

On the busiest days it has seen 30k+ active users, and storing 100M drawings currently sits at ~3.16 TB.

The milestone moment: I was watching live logs today, terrified the 100Mth drawing would be NSFW. Luckily, the RNG gods smiled and it turned out to be a Red Balloon. (You can see the 100Mth drawing here: https://www.drawize.com/blog/100-million-drawings-milestone)

Tech stack (boring but fast):
- Backend: .NET + WebSockets (real-time sync)
- Frontend: hand-coded HTML/JS + jQuery (no React, no bundlers)
- Data: PostgreSQL & MongoDB
- Storage: Wasabi Cloud (moved there to save on S3 costs)

Scaling as a solo dev: real-time lobbies + reconnection edge cases + moderation/content filtering. I use content classification models trained in 2021 to filter bad content, and the real-time multiplayer side is mostly highly optimized .NET code.

Happy to answer questions about the “failed” Tizen origin, real-time multiplayer on the web, moderation, or how .NET handles the load.

Show HN: Zenflow – orchestrate coding agents without "you're right" loops

Hi HN, I’m Andrew, founder of Zencoder.

While building our IDE extensions and cloud agents, we ran into the same issue many of you likely face when using coding agents in complex repos: agents getting stuck in loops, apologizing, and wasting time.

We tried to manage this with scripts, but juggling terminal windows and copy-paste prompting was painful. So we built Zenflow, a free desktop tool to orchestrate AI coding workflows.

It handles the things we were missing in standard chat interfaces:

- Cross-Model Verification: you can have Codex review Claude’s code, or run them in parallel to see which model handles the specific context better.
- Parallel Execution: run five different approaches on a backlog item simultaneously; mix "Human-in-the-Loop" for hard problems with "YOLO" runs for simple tasks.
- Dynamic Workflows: configured via simple .md files. Agents can actually "rewire" the next steps of the workflow dynamically based on the problem at hand.
- Project list/kanban views across all workloads.

What we learned building this:

To tune Zenflow, we ran 100+ experiments across public benchmarks (SWE-Bench-*, T-Bench) and private datasets. Two major takeaways that might interest this community:

- Benchmark Saturation: models are becoming progressively overtrained on all versions of SWE-Bench (even Pro). We found public results are diverging significantly from performance on private datasets. If you are building workflows, you can't rely on public benches.
- The "Goldilocks" Workflow: in autonomous mode, heavy multi-step processes often multiply errors rather than fix them. Massive, complex prompt templates look good on paper but fail in practice. The most reliable setups landed in a narrow "Goldilocks" zone of just enough structure without over-orchestration.

The app is free to use and supports Claude Code, Codex, Gemini, and Zencoder.

We’ve been dogfooding this heavily, but I'd love to hear your thoughts on the default workflows and whether they fit your mental model for agentic coding.

Download: https://zencoder.ai/zenflow
YT flyby: https://www.youtube.com/watch?v=67Ai-klT-B8

Show HN: Solving the ~95% legislative coverage gap using LLMs

Hi HN, I'm Jacek, the solo founder behind this project (Lustra).

The Problem: ~95% of legislation goes unnoticed because raw legal texts are unreadable. Media coverage is optimized for outrage, not insight.

The Solution: I built a digital public infrastructure that:

1. Ingests & Sterilizes: parses raw bills (PDF/XML) from US & PL APIs. Uses LLMs (Vertex AI, temp=0, strict JSON) to strip political spin.
2. Civic Algorithm: the main feed isn't sorted by an editorial board. It's sorted by user votes ("Shadow Parliament"). What the community cares about rises to the top.
3. Civic Projects: an incubator for citizen legislation. Users submit drafts (like our Human Preservation Act), which are vetted by AI scoring and displayed with visual parity alongside government bills.

Tech stack:
- Frontend: Flutter (Web & Mobile monorepo)
- Backend: Firebase + Google Cloud Run
- AI: Vertex AI (Gemini 2.5 Flash)
- License: PolyForm Noncommercial. Source is available for inspection, learning, and non-commercial civic use; commercial use would require a separate agreement.

I am looking for contributors. The US and Poland are live; the EU, UK, FR, and DE are in the pipeline and partially available. I need help building Data Adapters for other parliaments (the core logic is country-agnostic). If you want to help audit the code or add a country, check the repo. The goal is to complete the database as much as possible with current funding.

Live App: https://lustra.news
Repo: https://github.com/fokdelafons/lustra
Dev Log: https://lustrainitiative.substack.com
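
On the "temp=0, strict JSON" step: the project runs on Vertex AI, but the same idea can be sketched against the public Gemini REST endpoint. The prompt, output schema, and env variable below are illustrative assumptions, not Lustra's actual pipeline:

```typescript
// Sketch: ask an LLM for a deterministic, JSON-only summary of a bill.
// Uses the public Gemini generateContent REST endpoint as a stand-in for
// the project's Vertex AI setup; the prompt and fields are assumptions.

async function summarizeBill(billText: string): Promise<unknown> {
  const apiKey = process.env.GEMINI_API_KEY!;
  const url =
    "https://generativelanguage.googleapis.com/v1beta/models/" +
    `gemini-2.5-flash:generateContent?key=${apiKey}`;

  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      contents: [
        {
          parts: [
            {
              text:
                "Summarize this bill neutrally. Respond with JSON only, " +
                'shaped as {"title": string, "summary": string, ' +
                '"affected_groups": string[]}.\n\n' + billText,
            },
          ],
        },
      ],
      generationConfig: {
        temperature: 0,                       // deterministic output
        responseMimeType: "application/json", // force JSON, no prose wrapper
      },
    }),
  });

  const data = await res.json();
  // The generated JSON string lives in the first candidate's first part.
  return JSON.parse(data.candidates[0].content.parts[0].text);
}
```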

Show HN: Interactive Common Lisp: An Enhanced REPL

I created this because sometimes I want more than rlwrap but less than Emacs. icl aims to hit that middle sweet spot.

It's a terminal application with context-aware auto-complete, an interactive object inspector, auto-indentation, syntax colouring, persistent history, and much more. It uses Sly to communicate with the child Lisp process and aims to be compatible with any Sly-supporting implementation. I hope others find it useful!
