From Shooting Ranges to Computer Vision: How an iOS App Replaced Traditional Brass Plugs in Scoring
Product Launch · Computer Vision · iOS Development · Machine Learning

This article explores a unique intersection of traditional marksmanship and modern technology. The author, driven by a desire to hunt and cook venison from scratch, recounts their experience at a shooting range near Edinburgh. To streamline the tedious, manual process of scoring targets—which traditionally involves inserting physical brass plugs into bullet holes to determine shot placement—the author developed a digital solution. By porting the method from a 2012 OpenCV paper and training a state-of-the-art YOLOv8 computer vision model for iOS using CoreML, the author successfully automated the scoring ritual. This transition from manual inspection to mobile AI highlights the practical application of computer vision in niche hobbies, even if the technical overhead briefly delayed the ultimate goal of a home-cooked venison dinner.

Source: Hacker News

Key Takeaways

  • Manual Scoring Challenges: Traditional target scoring requires physical brass plugs to determine whether a shot touches a ring line—a process that is physically tedious and demands repetitive manual checks.
  • Technical Integration: The project involved porting the method from a 2012 OpenCV paper and training a YOLOv8 model to automate visual recognition on iOS via CoreML.
  • Motivation through Culinary Passion: The drive to learn shooting was rooted in a deep obsession with cooking from scratch, including charcuterie and making garum from grasshoppers.
  • Environmental Context: The development took place amidst the specific conditions of a shooting range in Edinburgh, characterized by low ceilings and traditional scoring rituals.

In-Depth Analysis

The Transition from Brass to Bits

In the traditional setting of an Edinburgh shooting range, scoring is a meticulous "ritual." When a shot lands near a ring line, shooters must use a tray of brass plugs of various sizes. These plugs are inserted into the bullet hole to determine the final score based on where the flange sits relative to the target rings. The author identified this process, along with the physical hazards of the range—such as low ceiling beams—as a bottleneck in their journey toward learning to hunt. To eliminate the "score-counting-head-hitting-plug-pushing ritual," the author turned to mobile technology.
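The plug-gauge rule the app replicates can be sketched in a few lines of Python. This is a hypothetical illustration of the scoring geometry (not the author's actual code, and with simplified, illustrative ring dimensions): a shot claims the higher ring's value when the edge of the bullet hole touches or breaks that ring's boundary line, which is exactly what seating a brass plug verifies physically.

```python
# Hypothetical sketch of plug-gauge ("best edge") scoring:
# a shot earns a ring's value if the edge of the bullet hole
# touches or breaks that ring's boundary line.
import math

def score_shot(shot_x, shot_y, bullet_diameter_mm, ring_radii_mm):
    """ring_radii_mm maps score value -> outer radius of that ring's
    boundary line; shot coordinates are relative to the target centre."""
    dist = math.hypot(shot_x, shot_y)          # hole centre to target centre
    inner_edge = dist - bullet_diameter_mm / 2  # closest point of the hole
    best = 0
    for score, radius in ring_radii_mm.items():
        # If the hole's inner edge reaches the ring line, the shot
        # scores that ring (keep the highest ring it reaches).
        if inner_edge <= radius:
            best = max(best, score)
    return best

# Simplified rings: score -> boundary radius in mm (illustrative values).
rings = {10: 5.0, 9: 10.0, 8: 15.0, 7: 20.0}
print(score_shot(9.0, 0.0, 5.6, rings))  # a .22-calibre hole centred 9 mm out -> 9
```

The same rule generalises to any target face once the real ring radii and calibre are substituted in.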

Engineering the Computer Vision Solution

The technical execution required bridging over a decade of research with modern mobile hardware. The author used a 2012 OpenCV paper as the foundational logic for target analysis and combined it with a state-of-the-art YOLOv8 (You Only Look Once) model. By leveraging Apple's CoreML framework, the model was optimized to run on an iPhone. This allowed the device to perform the task of the brass plug—detecting shot placement and calculating scores instantly through the camera lens—effectively digitizing a physical verification process that has remained unchanged for decades.
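A pipeline like this needs to convert detector output, which is in image pixels, into physical distances on the target before any scoring rule can apply. The following is a minimal sketch of that calibration step under simplifying assumptions (a fronto-parallel photo and a circle-detection stage that has already located the outer ring); the function names and example numbers are hypothetical, not taken from the app:

```python
# Hypothetical sketch: mapping detector output (pixels) onto the
# physical target, assuming a circle detector (e.g. OpenCV's
# HoughCircles) has already located the outer ring in the image.
def pixels_to_mm(hole_px, centre_px, outer_ring_radius_px, outer_ring_radius_mm):
    """Scale a detected hole centre from image pixels to millimetres
    relative to the target centre, assuming a fronto-parallel photo."""
    scale = outer_ring_radius_mm / outer_ring_radius_px  # mm per pixel
    dx = (hole_px[0] - centre_px[0]) * scale
    dy = (hole_px[1] - centre_px[1]) * scale
    return dx, dy

# Say the outer ring is found at pixel (640, 480) with radius 400 px,
# its real radius is 75 mm, and a YOLO-style detector reports a hole
# bounding-box centre at pixel (720, 480):
dx, dy = pixels_to_mm((720, 480), (640, 480), 400, 75.0)
print(dx, dy)  # 15.0 mm right of centre, 0.0 mm vertically
```

A real implementation would also correct for perspective (e.g. with a homography fitted to the detected rings) before scaling, since a handheld phone photo is rarely perfectly square-on.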

Industry Impact

This project demonstrates the increasing accessibility of high-end computer vision tools for individual developers. By successfully deploying YOLOv8 and OpenCV on a mobile device to solve a specific, niche problem like target scoring, it highlights the potential for AI to replace specialized physical tools (like brass plugs) in various hobbyist and professional fields. It also underscores the trend of "edge AI," where complex model inference is moved directly onto consumer hardware to provide real-time utility in environments without specialized equipment.

Frequently Asked Questions

Question: Why did the author decide to build an app instead of using brass plugs?

The author found the manual scoring process tedious and physically demanding, often involving hitting their head on low beams while checking cards. The app was designed to end the "ritual" of manual plug-pushing and speed up the progress tracking required for their hunting training.

Question: What specific technologies were used to create the scoring app?

The developer used OpenCV (building on a 2012 research paper), trained a YOLOv8 computer vision model, and deployed the solution on iOS using the CoreML framework.

Question: What was the ultimate goal behind learning to shoot?

The author's primary motivation was culinary. As an obsessive cook who makes their own garum and charcuterie, they wanted to learn to hunt so they could source and butcher whole venison carcasses from scratch.

Related News

Microsoft Edge Copilot Update: New AI Feature Enables Data Retrieval and Comparison Across Multiple Open Tabs
Product Launch

Microsoft is introducing a significant update to its Edge browser, empowering the Copilot AI chatbot to access and process information from all currently open tabs. This enhancement allows users to move beyond single-page interactions, enabling the AI to answer questions based on the collective content of a browsing session. Key functionalities include the ability to compare products across different websites, summarize multiple open articles simultaneously, and provide insights derived from various sources at once. By breaking the context barrier of the active tab, Microsoft aims to streamline complex research and shopping workflows, positioning Copilot as a more integrated and context-aware productivity assistant within the Edge ecosystem.

Notion Transforms Workspace into AI Agent Hub with New Developer Platform Launch
Product Launch

Notion has announced a significant evolution of its productivity suite, launching a new developer platform that positions the workspace as a central hub for AI agents. This strategic move allows teams to integrate AI agents, external data sources, and custom code directly into their Notion environment. By facilitating these connections, Notion is making a definitive push into the emerging category of 'agentic productivity software.' The platform aims to bridge the gap between static documentation and active, automated workflows, enabling a more dynamic and connected user experience. This development marks a shift from traditional document management toward an AI-native operating system for modern teams.

Ardent Launches Instant Postgres Sandboxing to Enable Risk-Free Database Testing for AI Coding Agents
Product Launch

Ardent (YC P26) has officially introduced its database branching platform, designed to provide developers and AI coding agents with instant, isolated Postgres sandboxes. By allowing users to create 1:1 copies of production databases in under six seconds, Ardent eliminates the risks associated with testing code on live data. The platform features a unique architecture where clones are isolated at both the compute and storage levels, ensuring zero impact on production performance. With extreme storage efficiency—charging only for data changes—and compute that autoscales to zero, Ardent addresses the scalability and cost challenges of traditional database replication. This launch aims to empower AI-native data teams to perform complex tasks like data cleaning, migration testing, and backfills with a "zero blast radius" approach.