The best Hacker News stories from Show HN from the past day
Latest posts:
Show HN: I indexed 8,643 BSides talks across 227 chapters and 6 continents
Hi HN,<p>I'm Roland, and for the past few weeks, I've been building AllBSides — a directory of every BSides conference talk uploaded to YouTube. As of today, 8,643 talks from 5,927 speakers across 227 chapters in 68 countries. Combined runtime is 280 days. The transcripts come to about 60 million words.<p>The archive came together in stages:<p>1. Manually map every BSides chapter's YouTube channel
2. Pull every video and transcript from Supabase
3. Run each transcript through Haiku for tag extraction (tools, topics, difficulty, team, talk style, research method, and much more)
4. Run results through Sonnet for categorization and dedup
5. Final pass goes through Opus for verification
6. Do a manual verification - at one time, the pipeline showed over 16k AI suggestions for manual verification. Today, most are resolved.<p>Total LLM cost so far: about €200. The whole pipeline is rebuildable from scratch.<p>Each talk gets its own page with embedded video, full transcript, speakers, tags, and "related talks." Each tool/framework/protocol/standard mentioned across the corpus gets its own page (3,968 distinct technologies tracked).<p>Some interesting facts I gathered while building it:<p>-(A) The site is currently 94% bot traffic. Of that, about 80,000 hits/month are AI training crawlers (ClaudeBot, GPTBot, meta-externalagent). Within 7 days of the talks archive going live, all major AI labs had ingested the entire corpus. The discovery cascade was startling to watch in real time.<p>-(B) The taxonomy work was the hardest part. Distinguishing "tools" from "frameworks" from "protocols" from "concepts" sounds easy until you have 5,000 ambiguous extracted entities. The 3-tier LLM pipeline helped a lot — Haiku alone was too noisy, Opus alone was too expensive.<p>-(C) Top tools mentioned: Wireshark (343), PowerShell (342), Metasploit (332), Burp Suite (322), GitHub (296), VirusTotal (273), Docker (253), Splunk (251), Nmap (247), MITRE ATT&CK (237). The list reflects what BSides talks actually discuss, not what vendors curate.<p>-(D) May is the peak BSides month — 29 events, 17% of all events with dates.<p>-(E) The top 1% of talks (86 videos by view count) account for 51% of all viewership. The other 99% are deeply niche, often the only video record of a specific technique.<p>The stack is intentionally lean: Go, SQLite, vanilla JavaScript, BunnyCDN. Static rendering at build time. No frameworks, no client-side state. 
The site costs about €50/month to run.<p>The data behind this post and much more can be found in the site footer, under the link "stats".<p>Happy to answer questions about the data pipeline, the taxonomy decisions, or what the AI crawler patterns looked like as the archive went live. Feedback on what to build next is genuinely welcome — I'm a solo dev figuring this out as I go.<p>— Roland (parkado)
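The three-tier extract, categorize, verify pipeline described above could be sketched roughly like this. This is an illustrative reconstruction, not the author's actual code: the `Talk` class and the `call_model` callable (which stands in for the Haiku/Sonnet/Opus API calls) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Talk:
    title: str
    transcript: str
    tags: list = field(default_factory=list)
    needs_review: bool = False

def run_pipeline(talks, call_model):
    """Three-tier LLM pipeline: a cheap model extracts noisy tags, a
    mid-tier model categorizes and dedupes them, and an expensive model
    does a final verification pass. `call_model(tier, payload)` abstracts
    the actual LLM call so the flow itself stays testable."""
    for talk in talks:
        raw_tags = call_model("haiku", talk.transcript)   # 1. noisy extraction
        clean_tags = call_model("sonnet", raw_tags)       # 2. categorize + dedup
        verdict = call_model("opus", clean_tags)          # 3. verification pass
        talk.tags = clean_tags
        talk.needs_review = verdict != "ok"               # 4. flag for manual review
    return talks
```

The point of structuring it this way is that each tier only sees the previous tier's (much smaller) output, which is what keeps the total cost low.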
Show HN: Brainio – Markdown notepad that turns notes into visual mind maps
Show HN: I Built a Museum Exhibit
Show HN: I built a new word game, Wordtrak
Hi HN! Looking for feedback on this 1v1 and daily word dueling game I've built over the last few months.<p>Play here: <a href="https://wordtrak.com/" rel="nofollow">https://wordtrak.com/</a><p>Or on iOS here: <a href="https://apps.apple.com/us/app/wordtrak/id6760442363">https://apps.apple.com/us/app/wordtrak/id6760442363</a> (Android version soon!)
Show HN: Airbyte Agents – context for agents across multiple data sources
I’m Michel, co-founder and CEO of Airbyte (<a href="https://airbyte.com/" rel="nofollow">https://airbyte.com/</a>). We’ve spent the last six years building data connectors. Today we're launching Airbyte Agents (<a href="https://docs.airbyte.com/ai-agents/" rel="nofollow">https://docs.airbyte.com/ai-agents/</a>), a unified data layer for agents to discover information and take action across operational systems.<p>Here’s a quick walkthrough: <a href="https://www.youtube.com/watch?v=ZosDytyf1fg" rel="nofollow">https://www.youtube.com/watch?v=ZosDytyf1fg</a><p>As agents move into real workflows, they need access to more tools (e.g. Slack, Salesforce, Linear). That means a ton of API plumbing: authentication, pagination, filters, handling schemas, and matching entities across systems.<p>Most MCPs don’t fix this. They’re thin wrappers over APIs, so agents inherit their weak primitives and still get it wrong most of the time, especially when working across tools.<p>An even deeper issue is that APIs assume you already know what to query (think endpoints, object IDs, fields), whereas agents usually start one step earlier: they first need to discover what matters before they can even start reasoning.<p>So we built Airbyte Agents to be a context layer between your agents and all of your data. The core of this is something we call Context Store: a data index optimized for agentic search, populated by our replication connectors. All that work on data connectors over the last six years comes in handy here!<p>This gives agents a structured way to discover data, while still allowing them to read and write directly to the upstream system when needed.<p>What got us working on this was an insane trace from an agent we were migrating to our new SDK. It was supposed to answer "which customers are at risk of leaving this quarter?" The trace had 47 steps. Most were API calls.
The agent first had to find a bunch of accounts, then map them to the right customers, then look for tickets, and so on... and when the agent finally responded, the answer sounded OK but was wrong. Not only that, it was excruciatingly slow. So we had to do something about it.<p>That 47-step agent is one example of a question where Airbyte Agents does particularly well. Other examples: - “Show me all enterprise deals closing this month with open support tickets." - “Find every support ticket that doesn’t have a GitHub issue opened”<p>Some of these might sound simple, but the quality of the answer changes dramatically when the agent doesn’t have to assemble all that context at runtime.<p>Once we had an early version of the product, I spent a weekend building a benchmark harness to see if it worked. Also, I just like writing benchmarks for fun :). I compared calling the Airbyte Agent MCP vs calling a bunch of vendor MCPs directly. I tested retrieval and search.<p>For the sake of simplicity, I used token consumption as a unit of measure. I think that’s a good proxy for how well agents are working. A failing agent (like the one that took 47 steps) will churn through lots of tokens while getting nowhere, while a successful one will get straight to the point.<p>Here's what I found when measuring: for Gong, it used up to 80% fewer tokens than their own MCP, for Zendesk up to 90% fewer, for Linear up to 75%, and for Salesforce up to 16% (Salesforce’s own SOQL does a good job here).<p>Of course there is the usual obvious bias: we are the builders of what we are benchmarking. So we made the test harness public: <a href="https://github.com/airbytehq/airbyte-agents-benchmarks" rel="nofollow">https://github.com/airbytehq/airbyte-agents-benchmarks</a>. Feel free to poke at it, and please tell us what you find if you do!<p>It's still early and some parts are rough, but we wanted to share this with the community asap. We'd love to hear from people building agents:
- Are you indexing data ahead of time, or letting the agent call APIs live?
- How are you matching entities across systems?<p>Would also love to hear any thoughts, comments, or ideas of how we could make this better, and if there are obvious things we’re missing. For now, we’re excited to keep building!
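The tokens-as-proxy measurement described above can be sketched in a few lines. This is a toy version, not the published harness; the ~4-characters-per-token estimate is a common rule of thumb, and the function names are made up for illustration.

```python
def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token is a common
    rule of thumb for English text (a real harness would use the model's
    own tokenizer)."""
    return max(1, len(text) // 4)

def tokens_for_trace(steps):
    """Sum estimated tokens over a full agent trace: every prompt, tool
    result, and response the agent churned through."""
    return sum(estimate_tokens(s) for s in steps)

def token_savings(baseline_steps, candidate_steps):
    """Fraction of tokens saved by the candidate setup vs the baseline.
    A failing agent burns tokens across many steps; a successful one
    gets to the answer in few."""
    base = tokens_for_trace(baseline_steps)
    cand = tokens_for_trace(candidate_steps)
    return (base - cand) / base
```

Comparing a 47-step trace against a 5-step trace of similar per-step size yields savings in the same ballpark as the percentages quoted above.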
Show HN: Explore color palettes inspired by 3000 master painter artworks
I built PaletteInspiration.com, a browsable archive of color palettes pulled from artworks by 3,000+ master painters (Monet, Vermeer, Raphael, Van Gogh).
Why I built it: every color palette generator I tried converged on the same five muted pastels. Painters spent centuries figuring out color and we mostly ignore that body of work when picking colors for digital design.
Please share your feedback on the Color Harmony Explorer - drag the wheel to any color and it shows which hues master painters historically paired with it (not only the standard complementary, analogous, and triadic pairings). It is based solely on co-occurrence across thousands of real paintings - actual empirical pairings, not algorithmic color-theory rules.<p>No signup, no paywall, no email capture. Just curious what people think.
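The co-occurrence idea behind the Harmony Explorer can be sketched like this. This is an illustrative reconstruction, not the site's code: the 30-degree hue buckets and all function names are assumptions.

```python
from collections import Counter
from itertools import combinations

def hue_bucket(hue_degrees, bucket_size=30):
    """Quantize a hue (0-360) into coarse buckets so near-identical
    hues count as the same color family."""
    return int(hue_degrees % 360) // bucket_size

def cooccurrence(paintings, bucket_size=30):
    """paintings: one list of hues per artwork. Returns a Counter of
    unordered bucket pairs that appear together in the same painting."""
    pairs = Counter()
    for hues in paintings:
        buckets = sorted({hue_bucket(h, bucket_size) for h in hues})
        pairs.update(combinations(buckets, 2))
    return pairs

def partners_for(pairs, hue_degrees, bucket_size=30, top=5):
    """The most frequent companion buckets for a given hue, ranked by
    how often painters actually used them together."""
    b = hue_bucket(hue_degrees, bucket_size)
    ranked = Counter()
    for (x, y), n in pairs.items():
        if x == b:
            ranked[y] += n
        elif y == b:
            ranked[x] += n
    return [bucket for bucket, _ in ranked.most_common(top)]
```

Dragging the wheel then just means calling `partners_for` with the selected hue: the ranking is purely empirical, with no color-theory rules baked in.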
Show HN: Parrot – a fun, skeuomorphic audio recorder to hear yourself
Hello HN,<p>This is my first Show HN and hopefully a fun one for y'all.<p>Parrot is a web app for easily recording throwaway audio clips. It was originally intended for pronunciation practice but may find other uses.<p>I got the idea to work on Parrot after reading the Launch HN of Issen [1]. Thinking about ways to help language learners improve their pronunciation, I remembered an easy method I've humbled myself with in the past: listening to a recording of my own voice.<p>The idea is to repeatedly record and listen to yourself, adjusting your pronunciation until you get it right. What makes Parrot different from other audio recording apps is that it doesn't save a log of all these throwaway audio clips that you then have to clean up. A recording only exists until it is overwritten by a new one (all "offline" and strictly local to your own device, of course).<p>That seems like a scant premise to justify making a whole new app, but that small change really makes a big difference for this use case. Though I'm not sure I would have made it if that was the only reason; more of a practical excuse.<p>The main reason was for stupid fun. Once I imagined this music gear-like device I knew I wanted to actually make it, in all its skeuomorphic glory (only missing is the wooden table).<p>I don't want to spoil all the fun bits, so play around and see for yourself :)<p>On dark mode being too dark and "unusable": that's an intentional joke. Do try it if you haven't!<p>Tech-wise it's rather basic: a bit of HTML, lots of CSS, some plain JS. The difficulty was in getting all the details dialed in. My biggest takeaway:<p>Surprise surprise, testing and QA are so important! The number of embarrassing bugs and flaws I would have missed had I not tested across all browsers and platforms is surprisingly high. The most basic things you assume to be true might very well not be! (`audio.currentTime = 0.0;` sets audio's play head to the beginning, right? Not in Firefox it doesn't!) 
I 110% recommend manual testing at various points in development: some things you have to experience for yourself.<p>My hosted version of Parrot is not Free, but there's a GPL'd version with personal touches removed available to download [2]. Inside the tarball is also a standalone version, fully contained in a single HTML file (for use without localhost).<p>I'll conclude on a personal insight. Listening to recorded audio of your voice can help improve your speech (or singing!), yes. It also gets you used to the sound of your own voice, which I've found helps build confidence.<p>Happy to discuss :)<p>[1] <a href="https://news.ycombinator.com/item?id=44387828">https://news.ycombinator.com/item?id=44387828</a><p>[2] <a href="https://www.zkhrv.com/parrot/free-parrot.tar.xz" rel="nofollow">https://www.zkhrv.com/parrot/free-parrot.tar.xz</a>
Show HN: Software Engineer to Novelist: Writing a Book Like Coding
I just published my first book, Means and Motive (<a href="https://www.amazon.com/dp/B0GYCZJVGX" rel="nofollow">https://www.amazon.com/dp/B0GYCZJVGX</a>).<p>As a software engineer, I approached writing like a software project. I used familiar tools (Emacs and HTML) for the primary writing.<p>I built my own tool (EPublish) to transform the HTML manuscript into an .epub file, the source for the ebook version. And I wrote shell scripts to reliably and repeatably transform the .epub version into PDF files for the printed editions.<p>I wrote 'design' and 'architecture' docs, describing the world, key actors, and timelines. I kept a task list of chapters and key scenes that needed to be written, in priority order. Along the way, I kept my files version-controlled so I could see the progress of the novel and edit mercilessly, without worrying about keeping old text around in backup files should I want it back for some reason.<p>If you've thought about writing a book, I highly recommend it. There are many similarities to the software engineering process. You'll also gain a newfound appreciation of the design, layout, and typesetting world, and of exactly how much work goes into each book you read.
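The HTML-to-epub step that EPublish performs can be sketched with the standard library alone. This is a minimal illustration of the epub container format, not the author's tool: an epub is a zip whose first entry must be an uncompressed file named `mimetype`, and the function and variable names here are hypothetical.

```python
import zipfile

# Minimal container.xml pointing at the package document. A complete epub
# also needs the OPF manifest this file references; omitted for brevity.
CONTAINER_XML = """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""

def package_epub(target, xhtml_chapters):
    """Package XHTML chapters into an .epub container. The 'mimetype'
    entry must come first and be stored without compression so readers
    can sniff it at a fixed offset."""
    with zipfile.ZipFile(target, "w") as z:
        z.writestr(zipfile.ZipInfo("mimetype"), "application/epub+zip",
                   compress_type=zipfile.ZIP_STORED)
        z.writestr("META-INF/container.xml", CONTAINER_XML)
        for name, html in xhtml_chapters:
            z.writestr(f"OEBPS/{name}", html,
                       compress_type=zipfile.ZIP_DEFLATED)
```

Because the container is just a zip with one ordering rule, the whole ebook build stays scriptable and repeatable, which is exactly what makes the shell-script approach to the print PDFs work too.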
Show HN: nfsdiag – An NFS diagnostic application
Show HN: I built a RISC-V emulator that runs DOOM
Demo: <a href="https://www.youtube.com/watch?v=f5uygzEmdLw" rel="nofollow">https://www.youtube.com/watch?v=f5uygzEmdLw</a><p>Hi HN,<p>I built a RISC-V emulator that implements the RV32IM instruction set and a minimal syscall interface to run DOOM. A few weeks ago, I got my first output with a simple hello world assembly program.<p>Since then I have been working tirelessly to get DOOM to run.<p>I needed to figure out how to run C programs first, and came across newlib, which allows the underlying environment to implement the syscall stubs one by one until the programs run.<p>I have also added ELF loading, but currently only a single `PT_LOAD` segment is supported.<p>To port DOOM, I used doomgeneric, which was quite convenient to get working once the required stubs were in place.<p>DOOM renders to a fixed area in memory (0x705FDD = VRAM_START):<p><pre><code> 0x7FFFFF +-------------------------------------+
| |
| QUEUE_SIZE (32 bytes) |
| |
0x7FFFDF +-------------------------------------+ <-- QUEUE_START
0x7FFFDE | QUEUE_READ_IDX |
0x7FFFDD | QUEUE_WRITE_IDX |
+-------------------------------------+
| |
| |
| VRAM (1,024,000 bytes) |
| |
| |
0x705FDD +-------------------------------------+ <-- STACK_START
| Stack |
| | |
| v |
| |
| ^ |
| | |
| Program data + Heap |
| |
0x000000 +-------------------------------------+
</code></pre>
I made a small linker script so that the entry point of a C program is at _start and the virtual base address is always 0. That kept the ELF loader code simple.<p>Inputs are written to the queue by rvcore and then read by DOOM running inside it.
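The memory map above can be re-derived as constants, together with a sketch of the host-side input push. This is an illustrative reconstruction from the diagram, not the emulator's actual source; the queue is modeled as a simple ring buffer driven by the read/write index bytes.

```python
# Memory-map constants taken from the diagram (illustrative re-derivation).
MEM_TOP         = 0x800000             # 8 MiB address space, top exclusive
QUEUE_START     = 0x7FFFDF             # 32-byte input event queue
QUEUE_SIZE      = 32
QUEUE_READ_IDX  = 0x7FFFDE             # one-byte read cursor (guest side)
QUEUE_WRITE_IDX = 0x7FFFDD             # one-byte write cursor (host side)
VRAM_START      = 0x705FDD
VRAM_SIZE       = 0x7FFFDD - VRAM_START  # = 1,024,000 bytes, as in the diagram

def push_input(mem: bytearray, event: int):
    """Host side: append one input byte to the ring buffer. DOOM inside
    the emulator drains it by advancing QUEUE_READ_IDX; the indices wrap
    modulo QUEUE_SIZE."""
    w = mem[QUEUE_WRITE_IDX]
    mem[QUEUE_START + w] = event & 0xFF
    mem[QUEUE_WRITE_IDX] = (w + 1) % QUEUE_SIZE
```

Note that 0x7FFFDD - 0x705FDD works out to exactly the 1,024,000 bytes the diagram labels as VRAM, so the regions tile the address space with no gaps.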
Show HN: Stop playing my matchstick puzzles, start building your own in seconds
Show HN: Large Scale Article Extract of Newspapers 1730s-1960s
Hello HN, over the past 7 months I've spent nearly 3,000 hours building SNEWPAPERS, the first historical newspaper archive with full-text extractions, nearly perfect OCR, a vast categorization taxonomy, and of course semantic and agentic search capabilities.<p>Problem:
I wanted to search through newspaper archives, but every service I tried only lets you search for keywords and dates, and gives you back raw images of the papers - too many of them, with no context. A sea of noise.<p>Solution:
I taught machines how to read the newspapers and so far I've extracted the content from > 600k pages (about 5TB) from the Chronicling America collection. Problems I had to deal with were an infinite variety of layouts, font sizes, image scan qualities, resolutions, aspect ratios, navigating around the images on the page. I also had to figure out how to get OCR to be nearly perfect so people wouldn't hate reading the extracts. I stitched together a multi-model pipeline (layout tech, ocr tech, llm, vllm) with heuristics to go from layout -> segmentation -> classification. I put it all in OpenSearch / Postgres and made it semantically searchable and also put an agentic search tool on top that knows how to use the API really well and helps you write queries to find what you're looking for. Happy to discuss AWS architecture and scaling as well, that was tough!<p>If you have five minutes and you just want to jump in and have your own personalized experience, what I would suggest is:<p>Before searching for anything, go to the Sleuth page
Ask it about anything from 1736 to 1963, maybe with 1 or 2 follow-up questions
Then go to the search page so you can see the queries it wrote for you (bottom left "saved queries") and uncover more info on whatever it is you're interested in<p>If you think it's cool and you want to learn more, then there's about 10 minutes of video guides on the various capabilities in "Guide" on the nav bar<p>Some other people have also taken a crack at this, notably:<p><a href="https://dell-research-harvard.github.io/resources/americanstories" rel="nofollow">https://dell-research-harvard.github.io/resources/americanst...</a> (very good attempt)
<a href="https://labs.loc.gov/work/experiments/newspaper-navigator/" rel="nofollow">https://labs.loc.gov/work/experiments/newspaper-navigator/</a> (focused on images)
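The layout -> segmentation -> classification flow described above could be structured like this. This is an illustrative sketch, not the SNEWPAPERS code: each stage is a pluggable model callable, and all names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class ArticleExtract:
    bbox: tuple      # (x, y, w, h) region on the page scan
    text: str        # OCR output for that region
    category: str    # taxonomy label

def process_page(image, layout_model, ocr_model, classifier):
    """Multi-model page pipeline: a layout model finds article regions,
    an OCR model reads each region, and a classifier assigns a taxonomy
    category. Empty or noise segments are dropped before classification."""
    articles = []
    for bbox in layout_model(image):       # 1. segment the page layout
        text = ocr_model(image, bbox)      # 2. read each region
        if not text.strip():
            continue                       # skip blank/noise segments
        category = classifier(text)        # 3. taxonomy label
        articles.append(ArticleExtract(bbox, text, category))
    return articles
```

Keeping the stages as swappable callables is what makes it practical to mix layout tech, OCR tech, and LLMs in one pipeline and replace any single stage as better models appear.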
Show HN: Mljar Studio – local AI data analyst that saves analysis as notebooks
Hi HN,<p>I’ve been working on mljar-supervised (open-source AutoML for tabular data) for a few years. Recently I built a desktop app around it called MLJAR Studio.<p>The idea is simple: you talk to your data in natural language, the AI generates Python code, executes it locally, and the whole conversation becomes a reproducible notebook (*.ipynb file). So instead of just chatting with data, you end up with something you can inspect, modify, and rerun.<p>What MLJAR Studio does:<p>- Sets up a local Python environment automatically, runs on Mac, Windows, and Linux<p>- Installs missing packages during the conversation<p>- Built-in AutoML for tabular data (classification, regression, multiclass)<p>- Works with standard Python libraries (pandas, matplotlib, etc.)<p>- Works with any data file: CSV, Excel, Stata, Parquet ...<p>- Connects to PostgreSQL, MySQL, SQL Server, Snowflake, Databricks, and Supabase.<p>For AI: use Ollama locally (zero data egress), bring your own OpenAI key, or use the MLJAR AI add-on.<p>I built this because I wanted something between Jupyter Notebook (flexible but manual) and AI tools that generate code but don’t preserve the workflow. Most tools I tried either hide too much or don’t give reproducible results and are cloud-based.<p>Demos:<p>- 60-second demo: <a href="https://youtu.be/BjxpZYRiY4c" rel="nofollow">https://youtu.be/BjxpZYRiY4c</a><p>- Full 3-minute analysis: <a href="https://youtu.be/1DHMMxaNJxI" rel="nofollow">https://youtu.be/1DHMMxaNJxI</a><p>Pricing is $199 one-time, with a 7-day trial.<p>Curious if this is useful for others doing real data work, or if I’m solving my own problem here.<p>Happy to answer questions.
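The conversation-becomes-a-notebook idea maps neatly onto the .ipynb JSON format. A minimal sketch (not MLJAR Studio's internals; the function name and turn structure are assumptions): each user prompt becomes a markdown cell and each generated snippet a code cell.

```python
import json

def conversation_to_notebook(turns, path=None):
    """Persist a chat-driven analysis as a runnable notebook. `turns` is
    a list of (prompt, generated_code) pairs; the result follows the
    nbformat 4 schema, so Jupyter can open, inspect, and rerun it."""
    cells = []
    for prompt, code in turns:
        cells.append({"cell_type": "markdown", "metadata": {},
                      "source": prompt.splitlines(keepends=True)})
        cells.append({"cell_type": "code", "metadata": {}, "outputs": [],
                      "execution_count": None,
                      "source": code.splitlines(keepends=True)})
    nb = {"cells": cells, "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
    if path:
        with open(path, "w") as f:
            json.dump(nb, f, indent=1)
    return nb
```

Because the notebook is the artifact rather than the chat log, the workflow stays inspectable and reproducible even after the conversation is gone.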
Show HN: Filling PDF forms with AI using client-side tool calling
Hey HN!<p>I built SimplePDF Copilot: an AI assistant that can interact with the PDF editor. It fills fields, answers questions, focuses on a specific field, adds fields, deletes pages, and so on.<p>It's built on top of SimplePDF that I started 7 years ago, pioneering privacy-respecting client-side pdf editing, now used monthly by 200k+ people.<p>As for the privacy model: the PDF itself never leaves the browser. Parsing, rendering, and field detection all run client-side.<p>The text the model needs (and your messages) goes to whatever LLM you point at. By default that's our demo proxy (DeepSeek V4 Flash, rate-capped), but you can BYOK and point it at any cloud provider, or go fully local (I've been testing with LM Studio).<p>Unlike the existing "Chat with PDF" tools that only retrieve the text/OCR layer, Copilot can act on the PDF: filling fields, adding fields (detected client-side using CommonForms by Joe Barrow [1], jbarrow on HN with some post-processing heuristics I added on top), focusing on fields, deleting pages, and so on.<p>I built this because SimplePDF is mostly used by healthcare customers where document privacy is paramount, and I wanted an AI experience that didn't require shipping PII to a third party.
Stack is pretty standard:<p>- Tanstack Start<p>- AI SDK from Vercel<p>- Tailwind (I personally prefer CSS modules, I'm old-school, but since I'm open-sourcing this, I figured Tailwind would be a better fit)<p>The more interesting part is the client-side tool calling: events are passed back and forth via iframe postMessage.<p>If you're not familiar with "tool calling" and "client-side tool calling", a quick primer:<p>Tool calling is what LLMs use to take actions. When Claude runs grep or ls, or hits an MCP server, those are tool calls.<p>Client-side tool calling means the intent to call a tool comes from the LLM, but the execution happens in the browser.<p>That matters for speed (you can't go faster than client-to-client operations), and it also lets you limit the data you expose to the LLM. For the demo I do feed the content of the document to the LLM, but that connection could be severed by simply removing the tool that exposes the content data.<p>The demo is fully open source, available on GitHub [2], and is the same app as the link of this post [3].<p>What's not open source is SimplePDF itself (loaded as the iframe).<p>I could talk on and on about this, let me know if you have any questions, anything goes!<p>[1] <a href="https://github.com/jbarrow/commonforms" rel="nofollow">https://github.com/jbarrow/commonforms</a><p>[2] <a href="https://github.com/SimplePDF/simplepdf-embed/tree/main/copilot" rel="nofollow">https://github.com/SimplePDF/simplepdf-embed/tree/main/copil...</a><p>[3] <a href="https://copilot.simplepdf.com/?share=a7d00ad073c75a75d493228e6ff7b11eb3f2d945b6175913e87898ec96ca8076&form=w9&lang=en" rel="nofollow">https://copilot.simplepdf.com/?share=a7d00ad073c75a75d493228...</a>
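The client-side tool-calling pattern described above can be shown in miniature. This is a language-agnostic sketch of the dispatch idea (the real app wires it over iframe postMessage in the browser); the response shape and handler names here are hypothetical.

```python
def dispatch_tool_calls(llm_response, tools):
    """Client-side tool calling in miniature: the model's response only
    carries tool-call *intents*; execution happens locally against a
    whitelist of handlers, so the document itself never has to leave
    the client."""
    results = []
    for call in llm_response.get("tool_calls", []):
        handler = tools.get(call["name"])
        if handler is None:
            # refuse anything outside the whitelist
            results.append({"name": call["name"], "error": "unknown tool"})
            continue
        results.append({"name": call["name"],
                        "result": handler(**call.get("arguments", {}))})
    return results
```

The privacy property falls out of the structure: the model can only reach data that a registered tool explicitly exposes, so cutting off document access means removing one handler.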
Show HN: Piruetas – A self-hosted diary app I built for my girlfriend
I searched for a simple, self-hosted journal app for my girlfriend and everything I found was either too complex, too feature-heavy, or too feature-light for what I needed, or required trusting a cloud service.<p>So I built Piruetas (it means pirouettes in Spanish - she chose the name, btw).<p>It's a day-per-page diary with rich text editing, drag-and-drop image uploads, auto-save, public
share links, and a clean mobile UI.
It can be set up for personal or multi-user usage via a docker compose deployment.<p>She seems to like it, so I decided to give back to the community and make it available for everyone (after some QA).<p>Live demo: <a href="https://piruet.app" rel="nofollow">https://piruet.app</a> (login: demo / piruetas — data resets every 30 min!)
GitHub: <a href="https://github.com/patillacode/piruetas" rel="nofollow">https://github.com/patillacode/piruetas</a>
Show HN: Ableton Live MCP
Ever wanted to control Ableton with just your voice? Me too! I made this MCP server so I could just ask Codex to do anything in Ableton Live for me, while I was nap-trapped by my baby.<p>The chat messages I sent to Codex to make this:<p><i>in ableton, make a self reflective song, with audio vocals (via macos say) and chip tunes and 80's drum machines. should be a real edm banger<p>i want midi for everything but vocals please, with ableton devices. not prerendered audio for instruments<p>needs some fills<p>and should hit way harder after "3-2-1 i become the sound"<p>the vocals are squished too much (read too quickly), give them a little more length<p>add some dynamics, the song is basically one volume. and some pumping side chain<p>improve dynamics of the clap, seems a bit flat and indistinguished, want it harder after the 3-2-1 drop<p>introduce a new element on a new track after the 3-2-1 drop, that comes in but then recedes before the final exit<p>doesn't seem like the new thing has any notes<p>the element is a bit muddy/indistinct. perhaps it needs simplification and more space, different instrument choice, i dunno</i>