White Paper v1.1 · December 2025
Tethra Systems LLC · Honolulu, Hawaiʻi
Traditional CRB response treats trees. SkyGuard tracks the infestation—at scale.
SkyGuard is a precision biosecurity platform that transforms how invasive species programs operate. By combining repeatable aerial surveillance, automated detection, and closed-loop outcome verification, SkyGuard enables programs to answer a question that has historically gone unanswered: Is the intervention working—at the population level?
Key Claims
- Scalable surveillance without linear labor growth
- Persistent identity linking observations and treatments over time
- Measurable outcomes with explicit denominators and confidence bounds
- Operational defensibility via immutable audit trails and evidence packets
- Continuous improvement through calibration sampling and outcome learning
The Opportunity
CRB threatens $500M-$1B in economic damage across Hawaii and Pacific territories. USDA is ending direct CRB funding, creating urgency for cost-effective alternatives. A strategic seed investment funds grant capture and business development to unlock non-dilutive federal funding and a beachhead municipal contract.
Implementation-ready: Scoped to sprint-level detail with an identified beachhead customer (City & County of Honolulu), direct access to Mayor's office, and validated build estimates. See Market Opportunity and Appendix I.
This document is confidential and intended for authorized distribution only.
Executive Summary
The Thesis
The Coconut Rhinoceros Beetle (CRB) represents one of the most significant biosecurity threats facing Hawaiʻi's palm populations. Current response programs share a common architecture: inspect individual trees, confirm damage, treat confirmed cases. This tree-centric approach has predictable failure modes at scale—coverage gaps, no persistence, no verification, and no population intelligence.
SkyGuard inverts this model. Instead of treating trees one at a time based on manual inspection, SkyGuard tracks the infestation itself through systematic aerial surveillance, automated detection, and verified outcomes. The result is a program that can scale coverage without scaling headcount, and—critically—can demonstrate whether interventions are working.
What Changes
| Traditional Approach | SkyGuard Approach |
|---|---|
| Complaint-driven sampling | Systematic survey coverage |
| No persistent palm IDs | Registry with stable identifiers |
| Single-indicator detection | Multi-modal fusion scoring |
| Per-palm inspection gates | Threshold-based treatment queuing |
| Treatment assumed effective | Imaging-verified outcomes |
| No learning loop | Calibration sampling improves accuracy |
Treatment Doctrine
SkyGuard is designed for contexts where the dominant risk is uncontrolled spread. In this framing, under-treatment is the failure mode. The system is calibrated to favor recall over precision—treating more suspected palms is generally preferable to missing infestations that seed new clusters.
The Opportunity
The immediate beachhead is municipal CRB response in Hawaiʻi. The City & County of Honolulu represents a single, well-defined customer with active biosecurity coordination needs and an existing relationship with Tethra Systems—including direct access to the Mayor's office.
The broader market is substantial: CRB threatens $500M-$1B in economic damage across Hawaii and Pacific territories over the next decade. The total addressable market for Pacific biosecurity surveillance exceeds $150M annually, spanning Hawaii (4 counties), US territories (Guam, CNMI, American Samoa), and COFA states (Palau, FSM, Marshall Islands).
Funding strategy: A $100K seed investment funds grant writing and business development to capture non-dilutive federal funding (USDA SBIR, DOI OIA, NIFA programs) while securing a city pilot with NRE contribution. Platform build is funded primarily through grants; Tethra retains full IP. Beyond biosecurity, the same platform architecture unlocks broader federal resilience programs (FEMA hazard mitigation, NOAA climate resilience, DoD dual-use) representing additional funding pools exceeding $100M annually. See Market Opportunity for detailed analysis.
A contract-operational system can be built in ~20 weeks for an estimated $600K–$1.0M. See Appendix I for build estimate.
Why Now
SkyGuard is possible because five technology areas have matured simultaneously:
- High-resolution aerial imaging with stable radiometric calibration
- RTK positioning enabling consistent georegistration across surveys
- Production photogrammetry producing reliable survey products
- Practical ML pipelines for segmentation and interpretable scoring
- Operational drone execution with detailed logging and compliance controls
No single technology is novel. The value is in the system architecture that integrates these components into an operational program with measurable outcomes.
What Success Looks Like
A successful pilot produces:
- Consistent coverage and processing latency
- Stable registry performance across survey cycles
- Measured detection accuracy (PPV/recall) on calibration samples
- Outcome reporting for treated cohorts with explicit denominators
- Evidence packets suitable for stakeholder accountability
SkyGuard does not promise eradication. It promises measurable progress with defensible methodology.
Market Opportunity
A $100K seed investment unlocks access to a $150M+ addressable market through strategic grant capture and government contract positioning.
Investment Thesis
The Pacific islands face an escalating biosecurity crisis. The Coconut Rhinoceros Beetle alone threatens $500 million to $1 billion in economic damage over the next decade, yet current spending on detection and response totals only $20-25 million annually—creating a massive gap that drone-based surveillance can help close.
The official Hawaii CRB Battle Plan (May 2025) explicitly calls for "mapping infestations using drones (with thermal imaging) and GIS technology"—a direct market signal. Meanwhile, USDA has announced it will end direct CRB Response funding, creating urgency for cost-effective alternatives.
Funding Strategy: How $100K Unlocks the Market
| Phase | Investment | Outcome | Risk Mitigation |
|---|---|---|---|
| Seed | $100K | Grant applications + City LOI + technical proof | Small bet; unlocks non-dilutive capital |
| Grant Capture | $500K-$1M+ (non-dilutive) | Platform development funding | Federal/state money; retain full IP |
| Pilot | City NRE contribution | Working system; reference customer | Customer co-investment = real demand signal |
| Production | Revenue-funded | Recurring contracts; geographic expansion | Proven system; referenceable results |
The Crisis Driving Demand
CRB has spread to every major Pacific jurisdiction. The CRB-G biotype is resistant to the Oryctes nudivirus used in traditional biocontrol.
| Jurisdiction | Status | Detection | Current Response |
|---|---|---|---|
| Oahu, Hawaii | Established | Dec 2013 | Suppression only |
| Kauai | Active, spreading | May 2023 | Eradication attempted |
| Big Island | Active, spreading | Oct 2023 | Eradication attempted |
| Maui County | Limited detection | Sep 2023 | Eradication attempted |
| Guam | Widespread | Sep 2007 | Suppression only |
| CNMI (Rota) | Established | Oct 2017 | Containment |
| Palau | Established | Pre-2016 | Biocontrol showing promise |
| Marshall Islands | State of emergency | Sep 2023 | FAO assistance active |
| FSM | CRB-free, high risk | N/A | Prevention/monitoring |
Market Sizing: TAM / SAM / SOM
Total Addressable Market: $150M+ annually
| Category | Annual Value | Source |
|---|---|---|
| Hawaii invasive species need | $50-57M | Legislative Reference Bureau |
| CRB economic damage potential | $50-100M/year | $500M-$1B over 10 years |
| Pacific territories (Guam, CNMI, Am. Samoa) | $5-10M | DOI OIA allocations |
| COFA states environment sector | $10-20M | Compact funding |
| Adjacent invasives (ROD, LFA, miconia) | $20-30M | Current program spend |
| TAM Total | $135-217M | |
Serviceable Addressable Market: $21-35M annually
| Opportunity | Value | Funding Source |
|---|---|---|
| CRB detection/monitoring contracts | $5-10M | USDA APHIS, State, DOD |
| ROD aerial surveillance | $4-6M | Federal + State |
| HISC/ISC detection grants | $3-4M | State appropriations |
| DOI OIA biosecurity projects | $2-3M | TAP, CRNR programs |
| COFA state agriculture support | $3-5M | Compact environment sector |
| County-level contracts | $2-3M | Honolulu, Maui, Hawaii Co. |
| Federal SBIR/STTR technology | $2-4M | USDA, potential DoD |
| SAM Total | $21-35M | |
Serviceable Obtainable Market: Years 1-3
| Year | Target Revenue | Sources |
|---|---|---|
| Year 1 | $500K-$1M | SBIR Phase I ($175K × 1-2), DOI OIA TAP ($100-250K), State pilot ($150-300K) |
| Year 2 | $1.5-2.5M | SBIR Phase II ($600K), Multi-county contracts ($500K-1M), COFA entry ($200-500K) |
| Year 3 | $3-5M | State production contract, Territory expansion, Adjacent use cases |
FY2026-2027 Funding Windows
Biosecurity-Focused Programs
| Program | Deadline | Amount | Fit |
|---|---|---|---|
| Hawaii State FY27 Budget | Jan-May 2026 | Variable | Legislative engagement window |
| USDA SBIR Phase II | Feb 2026 | $600K | Requires Phase I |
| USDA PPA 7721 | Jun-Jul 2026 | $50-200K | UH partnership recommended |
| DOI OIA Technical Assistance | Aug 2026 | $100-500K | Pacific territory partnerships |
| USDA SBIR Phase I | Sep-Oct 2026 | $175K | Topic 8.2 Plant Protection |
| NIFA SCRI Pre-application | Nov 2026 | Up to $2M/year | Requires academic partner |
State Technology Development
| Program | Amount | Fit |
|---|---|---|
| HTDC SBIR Matching Grants | $25-150K per award | Matches federal SBIR wins; Phase I ~$25K, Phase II ~$100K |
| Manufacturing Assistance Program | Up to $100K (20% of costs) | Hardware fabrication scale-up in Hawaii |
| Hawaii Tech Bridge (ONR) | Variable | Dual-use technology validation |
HTDC awarded $1.5M to 14 companies in the most recent cycle. SkyGuard's federal SBIR wins would automatically qualify for state matching funds.
Current Hawaii Funding Context
Hawaii's biosecurity spending has reached historic levels:
- Department of Agriculture renamed to Department of Agriculture and Biosecurity (DAB)
- Operating budget increased from $59M to $71M annually (20% increase)
- $10M in new biosecurity initiatives (Act 231, SLH 2024)
- $26M for biosecurity staffing
- HISC funding at highest level since 2003 (~$10M)
Critical transition: USDA ending direct CRB Response funding (~$2M/year + $350K canine support). State HB643 pending to fill gap—creating urgency for cost-effective detection alternatives.
Adjacent Revenue Streams
The same surveillance infrastructure supports multiple aerial-detectable targets:
| Target | Detection Method | Market Size | Geographic Scope |
|---|---|---|---|
| Rapid ʻŌhiʻa Death | Canopy dieback (proven) | $4M+/year proposed | All islands, Big Island focus |
| Miconia | Purple undersides visible | Part of $866M combined | All islands |
| Albizia | Distinctive canopy | $13M+ storm damage | All islands |
| Little Fire Ant habitat | Treatment verification | $6.8B over 35 years | Big Island, Maui, Oahu |
| Feral pig impacts | Thermal + habitat damage | Included in fencing | All islands |
Scale of the problem: Miconia and Little Fire Ant alone cause $866 million annually in losses.
Geographic Expansion Pathway
| Phase | Timeline | Markets | Revenue Model |
|---|---|---|---|
| Phase 1 | Year 1-2 | Oahu pilot → multi-county Hawaii | Direct service delivery |
| Phase 2 | Year 2-3 | Guam, CNMI (US territories) | Platform + local operators |
| Phase 3 | Year 3-5 | COFA states (Palau, FSM, RMI) | Processing-as-a-service |
| Phase 4 | Year 4+ | DoD installations, tourism sector | Enterprise contracts |
COFA States: Distinct Funding Pathway
The renewed Compact of Free Association (signed March 2024) provides $7.1 billion over 20 years with Environment as a designated sector grant category:
- $8.5M via USDA for FAS urban forestry and invasive species
- $250M FSM, $200M RMI, $50M Palau in mandatory trust fund contributions (FY2024)
- Access via Joint Economic Management Committees—requires local government partnerships
- Partner institutions: College of Micronesia-FSM, Palau Community College, College of Marshall Islands
Private Sector Opportunities
While the initial go-to-market is government-focused, significant private sector demand exists for palm asset protection:
- Resorts and hotels: Waikiki properties depend on palm aesthetics for tourism appeal; CRB damage directly impacts property value and guest experience
- Golf courses: Hawaii's 80+ courses maintain extensive palm landscapes; several have already lost trees to CRB
- Agricultural estates: Coconut and ornamental palm operations face direct crop losses
- Military installations: Pearl Harbor, Schofield Barracks, and other DoD facilities maintain palm landscapes and have dedicated grounds budgets
Private sector contracts offer revenue diversification beyond government budget cycles and could be pursued in parallel once the municipal pilot demonstrates capability. These customers typically have faster procurement cycles and may pay premium rates for priority service.
Accelerator & Non-Dilutive Investment
| Program | Amount | Fit |
|---|---|---|
| Elemental Excelerator | $200-500K non-dilutive | Climate resilience, environmental monitoring; Honolulu-based with strong local network |
| Blue Startups | $50-100K + mentorship | Hawaii-focused accelerator; tech commercialization |
| Hawaii Community Foundation | $25-100K grants | Environmental and conservation grants; community engagement components |
Elemental Excelerator (formerly Energy Excelerator) is particularly aligned—it invests in climate, water, agriculture, and "nature" technology solutions. Applications are accepted on a rolling basis with annual cohort selection. SkyGuard's focus on climate resilience and ecosystem protection makes it a strong candidate.
Beyond Biosecurity: Expansion Funding Pathway
CRB response is the beachhead, but SkyGuard's platform architecture supports broader environmental resilience applications. Once the biosecurity use case is validated, additional federal and state programs become accessible:
Federal Resilience Programs
| Program | Agency | Amount | Fit |
|---|---|---|---|
| Hazard Mitigation Grant Program (HMGP) | FEMA | $100K-$2M+ | Post-disaster early warning systems; 75% federal / 25% local match |
| Homeland Security Grant (UASI) | DHS/FEMA | $5-6M Hawaii allocation | Surveillance tech for emergency response; requires agency partner |
| Climate Resilience Regional Challenge | NOAA | $5-50M | Coastal hazard monitoring, ecosystem protection; Hawaii received $68.5M in 2024 |
| Transformational Habitat Grants | NOAA Fisheries | $750K-$10M | Ecosystem restoration with climate resilience benefits |
| Defense Research (ONR/SERDP) | DoD | Variable | Dual-use environmental monitoring for Pacific installations |
Expansion Use Cases
The same drone fleet, imaging sensors, and AI analytics that detect CRB damage can be adapted for:
- Wildfire early detection: Thermal imaging for ignition detection in high-risk areas
- Coastal erosion monitoring: Repeat surveys for shoreline change analysis
- Post-disaster damage assessment: Rapid aerial surveys for FEMA/insurance documentation
- Coral reef health monitoring: Coastal ecosystem surveillance
- Infrastructure inspection: Utility corridors, bridges, remote facilities
Strategic sequencing: Biosecurity validation (Years 1-2) → Adjacent environmental use cases (Years 2-3) → Broader disaster resilience (Years 3-5). Each stage unlocks larger funding pools while building operational credibility.
Why Now
- Policy tailwinds: Hawaii tripled biosecurity funding; department renamed to emphasize biosecurity
- Funding transition: USDA ending direct CRB funding creates service vacuum
- Technology validation: CRB Battle Plan explicitly calls for drone thermal imaging
- Proven detection: ROD drone detection already validated at UH Hilo
- Market timing: Agricultural drone market growing 25%+ CAGR
- Data gap opportunity: No comprehensive palm inventory exists for Oahu—SkyGuard creates foundational asset
- Relationship access: Tethra has direct line to Mayor's office and Chief of Staff
See Appendix I for detailed build estimate and beachhead customer profile.
The Problem
Why CRB is a Hard Biosecurity Problem
The Coconut Rhinoceros Beetle presents a structural challenge that current response architectures are not designed to solve.
The detection delay problem: Observable frond damage—the characteristic V-cuts that trigger human recognition—typically appears well after initial infestation. By the time damage is visible, beetles have already bred and dispersed. Programs that respond to visible damage are always fighting the previous generation's spread.
Note: SkyGuard enables earlier detection than complaint-driven discovery by systematically surveying the full footprint rather than waiting for reports. This is not a claim of pre-symptomatic detection—imaging still requires visible indicators, but systematic coverage finds them sooner than reactive programs.
The coverage problem: Beetles are mobile and opportunistic. Breeding sites include mulch piles, green waste, dead palms, and other decaying organic matter spread across diverse land uses. No feasible inspection program can monitor every potential site continuously.
The verification problem: Traditional programs treat palms and assume effectiveness. Without systematic outcome tracking, there is no way to determine whether treatments are actually reducing infestation pressure or merely consuming resources.
Why "Treating Trees" Fails at Scale
A tree-centric workflow—inspect, confirm, treat—has predictable failure modes:
Coverage Gaps
Inspectors cannot be everywhere. A 30,000–70,000 palm footprint with monthly coverage would require an estimated 8–23 full-time inspectors just to maintain pre-treatment confirmation. (Illustrative: assumes ~150-200 palms/inspector/day and 20 working days/month, including travel and documentation time.) Activity spreads in the uninspected gaps.
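The staffing estimate above follows directly from the stated assumptions; a minimal sketch of the arithmetic (all figures illustrative, from this document):

```python
def inspectors_needed(palms: int, palms_per_day: int, work_days: int = 20) -> float:
    """Full-time inspectors required for one complete inspection pass per month."""
    return palms / (palms_per_day * work_days)

# Bounds from the stated assumptions: 30k-70k palms, 150-200 palms/inspector/day.
low = inspectors_needed(30_000, palms_per_day=200)   # best case: fewest palms, fastest pace
high = inspectors_needed(70_000, palms_per_day=150)  # worst case: most palms, slowest pace
print(f"{low:.0f}-{high:.0f} inspectors")  # prints "8-23 inspectors"
```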
No Persistence
Without stable palm identifiers, there is no way to link observations over time. Each survey is an isolated snapshot. Treatment history, outcome trajectories, and longitudinal patterns are invisible.
No Verification
When a palm is treated, the program cannot systematically determine whether the treatment worked. "Success" is asserted rather than measured.
No Population Intelligence
Tree-centric programs cannot answer population-level questions: Where is activity concentrating? Is prevalence increasing or decreasing? What is the program's actual effectiveness rate?
What a Modern Program Requires
| Capability | Requirement |
|---|---|
| Systematic surveillance | Repeatable coverage, not complaint-driven sampling |
| Persistent identity | Observations and actions accumulate per palm over time |
| Governable decisions | Threshold-based targeting with explicit tradeoffs |
| Verified outcomes | Measurable treatment effectiveness with denominators |
| Operational auditability | Traceable decisions for stakeholder defense |
Evidence Base
Independent research validates both the severity of Hawaii's CRB crisis and the technical feasibility of SkyGuard's approach. This section presents third-party evidence with citations.
The Crisis Has Escalated Despite Intensive Response
CRB was first detected on December 23, 2013 at Joint Base Pearl Harbor-Hickam. Despite a decade of response efforts and millions in federal funding, the beetle has spread to four major islands:
| Island | First Detection | Current Status (2025) |
|---|---|---|
| Oahu | December 2013 | Established; eradication abandoned |
| Kauai | May 2023 | Active response |
| Hawaii Island | October 2023 | Spreading from Waikoloa to Kona |
| Maui | November 2023 | Active response |
Sources: Hawaii DAB CRB Information; Governor's Office Press Release
Hawaii's CRB population is the CRB-G (Guam biotype)—resistant to the Oryctes nudivirus that controlled CRB elsewhere in the Pacific. CRB-G "hot spots" cause 50-90% palm damage versus 10-50% for susceptible strains.
Source: Pacific Community: New Biotype Discovery
Official Documents Acknowledge Program Limitations
"Detections on Maui, Kauaʻi, and Hawaiʻi in 2023 indicate our ability to eradicate CRB from the island of Oʻahu is no longer feasible."
— Hawaii Department of Agriculture, December 2024
The CRB Response program's own documentation reveals structural constraints:
- Limited geographic scope: "Our program is primarily funded to manage populations near ports on Oahu and around high-risk commodities being shipped to other islands."
- Capacity constraints: "Limited capacity to inspect, treat, and perform outreach on other islands... travel off Oahu is limited."
- No regulatory authority: The program "does not have the authority to quarantine, make or enforce rules, or perform treatments without landowner authorization."
Source: CRB Hawaii Response Program
HDOA's December 2024 biosecurity report noted only 13 CRB surveys were conducted on Oahu over a 12-month period, with "staffing shortages and current budgetary limitations" forcing focus on "maintaining projects versus active testing."
The CRB Battle Plan Calls for Drone Surveillance
The official May 2025 CRB Eradication Battle Plan prepared for the Hawaii Invasive Species Council explicitly recommends drone-based detection:
"Map infestations using community reports, drones (with thermal imaging), and GIS technology. Focus on target hotspots first for efficient management."
— CRB Battle Plan, Step 1: Surveillance and Mapping
Source: HISC CRB Battle Plan Presentation (PDF)
Drone Detection Technology Is Validated
UH Hilo's Spatial Data Analysis and Visualization Laboratory has validated drone-based tree disease detection in Hawaiian conditions:
| Application | Accuracy | Method |
|---|---|---|
| Rapid Ohia Death detection | 79-100% | Multispectral imaging (MicaSense sensors) |
| Miconia detection (AI-assisted) | 83% recall | Deep convolutional neural networks |
| Forest health monitoring | <6.8% error | Hyperspectral at 7m² threshold |
Sources: UH Hilo: Ryan Perroy ROD Research; Remote Sensing (2020)
Dr. Ryan Perroy's team won the $70,000 National Park Service ʻŌhiʻa Challenge in 2019 for their drone-based ROD detection approach. The same platform architecture applies directly to CRB damage detection.
State Biosecurity Funding Has Tripled
Hawaii's biosecurity investment reached unprecedented levels in 2024-2025:
| Legislation | Amount | Key Provisions |
|---|---|---|
| Act 231 (2024) | $10M | Targeted biosecurity initiatives; $1.2M for CRB (drones, chemicals, nets) |
| Act 236 (2025) | $8.5M | Department renamed to Agriculture & Biosecurity; HISC funding |
| Act 250 (2025) | $26.6M | 44 new permanent biosecurity positions |
Total state biosecurity funding increased from $6.2M to $19.2M annually—more than tripling since FY2023.
Sources: DAB Biosecurity Legislation; Spectrum News
Economic Damage Projections Are Severe
| Threat | Projected Impact | Timeframe |
|---|---|---|
| CRB | $500M – $1B | Next decade |
| Little Fire Ant | $1.2B treatment costs | Cumulative |
| Rapid Ohia Death | 1M+ trees killed | To date (270,000 acres) |
| Miconia + LFA combined | $866M annually | Ongoing |
Sources: HISC CRB Battle Plan; Earth.Org; ScienceDirect: LFA Economic Analysis
Current Detection Is Reactive, Not Systematic
Every new island detection occurred after beetles had already established breeding populations:
- Kauai: First detection at green waste transfer station where beetles were already present
- Hawaii Island: Discovered when a resident reported grubs in a decaying palm stump
- Maui: Detected in nursery products after interisland spread
HDOA acknowledged interim rules were "launched too late to contain the spread of CRB on the island of Oahu." The 20-month delay on rule changes allowed continued interisland spread.
Pheromone traps—the primary detection tool—have documented limitations. HDOA states traps are "used for early detection of CRB infestations and are not effective in reducing populations." Dr. Aubrey Moore (University of Guam) noted at the 2025 science symposium that "pheromone trap catch may not be correlated with population density."
The Gap SkyGuard Fills
| Current Limitation | SkyGuard Capability |
|---|---|
| 13 surveys/year on Oahu | Systematic coverage on defined cadence |
| Port-focused, limited neighbor island capacity | Multi-island platform deployment |
| Reactive detection after establishment | Proactive damage pattern identification |
| No outcome verification | Persistent tracking with measured effectiveness |
| No palm inventory exists | Registry with stable identifiers |
| Trap catch ≠ population data | Population-level intelligence |
Adjacent Threats Multiply Platform Value
The same aerial surveillance infrastructure addresses multiple biosecurity threats:
Rapid Ohia Death (ROD)
ROD has killed over 1 million ʻōhiʻa trees across 270,000+ acres on Hawaii Island, with confirmed infections on Kauai and Oahu. Aerial detection is already validated—Carnegie Airborne Observatory research demonstrated that 80% of ROD-infected plants show detectable decreases in foliar chlorophyll and water content.
Little Fire Ant (LFA)
Over 4,000 infested locations on Hawaii Island. Economic analysis projects $1.2 billion in treatment costs. While direct aerial detection of ants is challenging, drones enable treatment delivery and habitat surveillance.
Miconia
UH Hilo research achieved 83% recall using AI-assisted drone imagery—outperforming human analysts (50-74% detection). The distinctive bicolored leaves provide clear aerial signatures.
Bottom line: Independent sources confirm that current CRB response is failing at scale, official plans call for drone-based surveillance, the technology is validated, and funding is available. SkyGuard addresses every documented gap.
The Solution
What SkyGuard Is
SkyGuard is a precision biosecurity operations platform. It combines repeatable aerial imaging with automated analysis to produce treatment queues and verified outcomes.
The system does not track individual beetles. It tracks infestation indicators at the palm level and produces population-level intelligence that is operationally useful and programmatically defensible.
Core Operating Loop
Sense
Repeatable aerial surveys capture high-resolution RGB imagery (multispectral optional) on a defined cadence. RTK positioning ensures consistent georegistration across survey cycles.
Analyze
Survey imagery flows through an automated pipeline: Palm crown segmentation creates or updates registry entries. Multi-modal detection extracts damage and stress indicators. Fusion model produces calibrated risk scores per palm. Risk scores are explainable via feature attribution.
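The fusion step can be pictured as a calibrated additive model whose per-feature terms double as the attribution. This is a minimal sketch only: the indicator names and weights below are invented for illustration, and the production fusion model and attribution method (e.g., SHAP over an ensemble) may differ.

```python
import math

# Hypothetical indicator features and learned weights -- illustrative only.
WEIGHTS = {"v_cut_score": 2.1, "crown_thinning": 1.4, "frond_loss": 0.9}
BIAS = -3.0

def risk_score(features: dict) -> tuple:
    """Fuse per-palm damage indicators into a risk score in [0, 1],
    returning per-feature contributions for explainability."""
    contributions = {k: WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    return 1 / (1 + math.exp(-logit)), contributions

risk, why = risk_score({"v_cut_score": 0.8, "crown_thinning": 0.6, "frond_loss": 0.2})
```

Because the score is additive in logit space, each entry of `why` is an exact answer to "why did this palm score high?".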
Treat
Treatment targeting is threshold-based: Palms above the configured risk threshold enter a treatment queue. Nearby palms are clustered into treatment zones. Mission packages are generated with compliance constraints. Human operators approve batches and licensed applicators execute.
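The queue-and-cluster step can be sketched as follows. This is an illustrative greedy grouping under assumed parameters (a 0.7 threshold and 50 m zone radius are examples, not the system's configured values):

```python
from dataclasses import dataclass

@dataclass
class Palm:
    palm_id: str
    x: float      # easting (m)
    y: float      # northing (m)
    risk: float   # calibrated risk score in [0, 1]

def build_treatment_zones(palms, threshold=0.7, radius_m=50.0):
    """Queue palms at or above the risk threshold, then greedily group
    queued palms within radius_m of a zone seed into one treatment zone."""
    queue = sorted((p for p in palms if p.risk >= threshold),
                   key=lambda p: p.risk, reverse=True)
    zones, assigned = [], set()
    for seed in queue:
        if seed.palm_id in assigned:
            continue
        zone = [p for p in queue if p.palm_id not in assigned
                and (p.x - seed.x) ** 2 + (p.y - seed.y) ** 2 <= radius_m ** 2]
        assigned.update(p.palm_id for p in zone)
        zones.append(zone)
    return zones
```

Each zone would then be wrapped into a mission package with compliance constraints before human batch approval.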
Verify
Follow-up imaging enables outcome measurement: Treated palms are re-observed in T+30, T+60, and T+90 windows (±5-10 days based on survey cadence). Composite and risk-based outcome scores classify response. Confounders (drought, storm, other pests) are flagged and excluded. Outcome distributions are reported with explicit denominators.
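Assigning a follow-up observation to its verification window is simple date arithmetic; a sketch, using a 7-day tolerance as one example within the ±5-10 day range stated above:

```python
from datetime import date

# Verification windows (days after treatment), per the cadence described above.
WINDOWS = {"T+30": 30, "T+60": 60, "T+90": 90}

def assign_window(treated_on: date, observed_on: date, tolerance_days: int = 7):
    """Return the verification window an observation falls in, or None."""
    delta = (observed_on - treated_on).days
    for name, target in WINDOWS.items():
        if abs(delta - target) <= tolerance_days:
            return name
    return None
```

Observations that land in no window (or that coincide with flagged confounders such as drought or storm damage) would be excluded from the outcome denominators.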
Learn
Calibration sampling and verified outcomes close the loop: Independent inspections measure detection accuracy (PPV/recall). Outcome data informs model recalibration. Threshold governance responds to measured performance.
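The PPV/recall measurement reduces to counting agreement between system flags and independent ground inspections. A minimal sketch with explicit denominators:

```python
def detection_metrics(samples):
    """Compute PPV (precision) and recall from calibration inspections.
    Each sample: (flagged_by_system: bool, infested_on_inspection: bool)."""
    tp = sum(1 for flagged, truth in samples if flagged and truth)
    fp = sum(1 for flagged, truth in samples if flagged and not truth)
    fn = sum(1 for flagged, truth in samples if not flagged and truth)
    ppv = tp / (tp + fp) if tp + fp else None       # of palms flagged, share truly infested
    recall = tp / (tp + fn) if tp + fn else None    # of infested palms, share flagged
    return ppv, recall
```

Reporting both numbers, with their denominators, is what lets threshold governance trade precision against recall deliberately rather than implicitly.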
Human-in-the-Loop Governance
SkyGuard is explicitly not autonomous. Humans retain authority at key control points:
| Control Point | Role | Authority |
|---|---|---|
| Threshold configuration | Program Manager | Sets precision/recall operating point |
| Batch approval | Program Manager | Reviews and releases treatment missions |
| Field execution | Licensed Applicator | Safety abort; obstacle discretion |
| Model updates | Data Lead | Approves retraining and recalibration |
| Stop conditions | Multiple | Mandatory halts for performance breaches |
AI Query Interface (Phase 2)
SkyGuard's architecture supports a natural language query layer powered by a local LLM, enabling operators to interact with program data conversationally and generate narrative summaries on demand. This capability is planned for Phase 2 deployment after core pipeline validation.
How It Works
The LLM is grounded in actual data—it queries the live registry, risk scores, and treatment history, then formats results into functional narrative summaries. No external API calls; all processing stays on-prem.
Example Queries
| User Asks | System Returns |
|---|---|
| "What are the highest-risk zones this week?" | Ranked zone list with counts and avg scores |
| "Summarize December program status" | Coverage, treatment, outcome metrics in prose |
| "Why did palm abc-123 score so high?" | SHAP factors formatted as narrative |
| "Compare Zone A vs Zone B outcomes" | Side-by-side outcome distributions |
| "Treatment history for Kailua sector" | Timeline summary with key stats |
| "Draft a Q4 progress update" | Aggregated metrics formatted for stakeholders |
Design Principles
- Grounded responses: All answers derived from database queries, not model training
- Functional narrative: Clear summaries optimized for operational use, not prose quality
- On-prem processing: Local LLM keeps data in-house, no external API dependencies
- Domain-tuned: Fine-tuned on CRB/biosecurity vocabulary and report formats
- Query restrictions: Text-to-SQL limited to SELECT operations on authorized views; no data modification permitted via natural language interface
Note: Core detection, treatment, and outcome tracking functionality operates independently of the AI query interface. The LLM layer enhances usability but is not on the critical path for program operations.
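The SELECT-only restriction can be enforced as a guard in front of the text-to-SQL layer. A minimal sketch; the view names are invented for illustration, and in practice this check would sit on top of database-level read-only permissions, not replace them:

```python
import re

# Hypothetical authorized read-only views -- illustrative names only.
ALLOWED_VIEWS = {"palm_registry_v", "risk_scores_v", "treatment_history_v"}

def is_safe_query(sql: str) -> bool:
    """Allow only a single SELECT statement against authorized views."""
    stmt = sql.strip().rstrip(";")
    if ";" in stmt:                      # reject multi-statement input
        return False
    if not re.match(r"(?is)^select\b", stmt):
        return False                     # no INSERT/UPDATE/DROP/etc.
    tables = re.findall(r"(?is)\b(?:from|join)\s+([a-z_][a-z0-9_]*)", stmt)
    return bool(tables) and all(t.lower() in ALLOWED_VIEWS for t in tables)
```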
What SkyGuard Is Not
| SkyGuard Is | SkyGuard Is Not |
|---|---|
| Systematic surveillance at scale | Dependent on per-palm manual inspection |
| Persistent registry with longitudinal tracking | One-time damage survey |
| Population-level intelligence | Individual beetle tracker |
| Threshold-based targeting with governance | Case-by-case human authorization |
| Verified outcomes with denominators | Assumed treatment effectiveness |
| Bounded claims with confidence intervals | "X% accuracy" without methodology |
Business Model & Moat
Why This Is a Business
A detection model is not a product. The market value exists where:
- Decisions translate into field operations
- Outcomes are verified and reported
- Performance is measured and defended
- The system improves with operational data
SkyGuard's product is the operating system for biosecurity response—not a point solution, but an integrated platform that captures value across the entire operational loop.
Revenue Model
Platform Subscription (Annual)
Recurring revenue for core platform access: Palm registry and risk scoring engine, program dashboards and reporting, audit trail and evidence packet generation, model monitoring and calibration tooling, governance tooling.
Operations Services (Variable)
Usage-based revenue scaling with program footprint: Survey operations (per survey or per acre), processing and analysis (per survey cycle), mission packaging and reconciliation (per batch), verification reporting.
Professional Services (As-Needed)
High-value services for program maturity: Calibration sampling coordination, compliance documentation, model customization, training.
Competitive Moat
1. Data Moat: Longitudinal Registry + Outcomes
SkyGuard creates a unique dataset that cannot be quickly replicated: persistent palm IDs across survey cycles, linked treatments and verified outcomes, calibration labels with known denominators, temporal patterns revealing population dynamics.
2. Defensibility Moat: Auditability + Evidence Packets
Municipal programs face stakeholder scrutiny. SkyGuard's value increases when accountability is demanded: every decision is traceable to inputs and rules, immutable logs prevent post-hoc modification, evidence packets support per-palm defense.
3. Governance Moat: Decision Contract
SkyGuard's threshold-based decision contract makes the program: Tunable (operators adjust the precision/recall tradeoff), Consistent (same inputs produce same outputs), Reviewable (decision rules are explicit and auditable), Transferable (staff turnover doesn't break institutional knowledge).
4. Execution Moat: Operations Playbook
Scaled biosecurity programs are won or lost on execution: repeatable survey SOPs with QA gates, mission packaging with compliance constraints, exception handling with documented escalation, verification protocols with outcome classification.
5. Relationship Moat: Early Adopter Access
The existing relationship with City & County of Honolulu provides: reduced time-to-pilot, real operational context for development, credibility with adjacent stakeholders, reference customer for expansion.
Unit Economics (Illustrative)
At pilot scale (~5,000 acres, estimated 30,000–70,000 palms, monthly surveys):
| Cost Component | Traditional | SkyGuard |
|---|---|---|
| Inspection labor | 6-20+ FTE | ~0.5 FTE (calibration only) |
| Survey operations | N/A | 2-3 FTE |
| Treatment operations | 4-6 FTE | 2-3 FTE |
| Program management | 2-3 FTE | 1-2 FTE |
| Verification | None | Included in survey |
| Total | 12-30+ FTE | 6-9 FTE |
Note: Traditional FTE range reflects uncertainty in palm count. SkyGuard economics are relatively stable because survey costs scale with acreage, not palm count.
Roadmap & Risks
Validation Pathway
SkyGuard follows a pilot-gated roadmap. Each phase produces specific evidence artifacts before advancing.
Phase 0: Bench Validation (4-8 weeks)
Goal: Demonstrate pipeline reproducibility on representative Hawaiʻi imagery.
| Milestone | Evidence |
|---|---|
| Survey SOP established | Documented altitude/speed/overlap windows |
| Pipeline reproducibility | Same outputs for same inputs (within tolerance) |
| Registry schema validated | Stable IDs, observation linking, spatial indexing |
| Baseline model performance | Published metrics on available labeled data |
Phase 1: Pilot Launch (Month 1-3)
Goal: Establish operational tempo and baseline program metrics.
Phase 2: Verification Emergence (Month 4-6)
Goal: Begin outcome measurement with first T+90 cohorts.
Phase 3: Scale Proof (Month 7-12)
Goal: Use outcome data to recalibrate and prove scalability.
Key Risks and Mitigations
| Risk | Impact | Mitigation |
|---|---|---|
| Model drift / domain shift | Detection accuracy degrades | Continuous calibration monitoring; threshold governance; retraining plan |
| Registry ID instability | Longitudinal tracking breaks | Multi-signal matching; stability KPIs; manual correction workflow |
| Coverage gaps | Verification denominators incomplete | Gap detection; fill planning; missingness tracking |
| Applicator compliance | Regulatory exposure | Credential checks; mission constraints; immutable logs |
| Geographic expansion | Models trained on Hawaii may underperform in new regions (Guam, Palau, etc.) due to different palm varieties, lighting, vegetation | Calibration datasets from new locales before deployment; transfer learning approaches; local expert labeling partnerships |
| Operational scaling | Field operations (pilots, ground crews) may not scale as fast as contracts | Standardized SOPs; training programs; local operator partnerships in expansion territories |
Claims SkyGuard Will Not Make
| We Will Say | We Will Not Say |
|---|---|
| "PPV of X% (95% CI: Y-Z%) on calibration sample" | "X% accuracy" without methodology |
| "Outcome improvement in N% of treated cohort" | "Eradicated CRB in treated area" |
| "Human-supervised automated targeting" | "Autonomous treatment" |
Success Criteria
| Dimension | Criterion |
|---|---|
| Coverage | ≥95% of planned footprint per cycle |
| Processing | <24 hours latency (P95) |
| Registry | ≥95% ID stability (3+ surveys) |
| Detection | PPV >50%, Recall >85% on calibration sample |
| Verification | <15% missingness |
| Auditability | Evidence packets complete for all treated palms |
Appendix A: Canonical Definitions
These definitions are authoritative throughout the SkyGuard system. Terms are defined once here and not redefined elsewhere.
Core Entities
Palm
A single palm tree instance in the registry with: palm_id (stable UUID), geometry (crown polygon), first_observed, status (Active | Removed | Merged | Split).
Survey
A complete imaging collection over a defined footprint with: survey_id, footprint, captured_at, sensor_config, qa_status.
Risk and Decision
Risk Score
A calibrated likelihood score in [0.0, 1.0] representing the probability of active CRB damage requiring treatment. Generated by the fusion model from imaging-derived features. Calibrated on the monthly Stream A Audit sample; calibration is monitored for drift and recalibrated as needed.
Treatment Threshold
The risk score cutoff above which palms enter the treatment queue. Operator-configurable. Default: 0.6.
Holdout Period
Minimum time after treatment before a palm can be re-queued. Default: 90 days.
Detection Metrics
Positive Predictive Value (PPV)
The proportion of palms flagged as above-threshold that are confirmed positive by Level 4A ground-truth (RCI or physical inspection). Computed on the monthly Stream A Audit sample.
PPV = True Positives / (True Positives + False Positives)
Recall (Sensitivity)
The proportion of truly infested palms (confirmed by Level 4A ground-truth) that the model correctly flagged as above-threshold. Computed on the monthly Stream A Audit sample.
Recall = True Positives / (True Positives + False Negatives)
Ground-Truth Definition
A palm is "truly infested" if Level 4A verification (RCI or physical inspection with evidence package) confirms active CRB damage. Stream A Audit sample provides the denominator for both metrics.
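The two metric formulas above reduce to simple set arithmetic over the audit sample. As an illustrative sketch (the function name is ours, not part of the SkyGuard API):

```python
def ppv_recall(flagged, truly_infested):
    """Compute PPV and recall from two sets of palm IDs.

    flagged: palms the model scored above-threshold (Level 2)
    truly_infested: palms confirmed by Level 4A ground-truth
    Both sets are drawn from the same Stream A Audit sample,
    which provides the denominator for both metrics.
    """
    tp = len(flagged & truly_infested)   # true positives
    fp = len(flagged - truly_infested)   # false positives
    fn = len(truly_infested - flagged)   # false negatives
    ppv = tp / (tp + fp) if (tp + fp) else None
    recall = tp / (tp + fn) if (tp + fn) else None
    return ppv, recall

# Toy audit sample: 4 flagged palms, 3 confirmed infested
ppv, recall = ppv_recall({"p1", "p2", "p3", "p4"}, {"p1", "p2", "p5"})
# PPV 0.5, recall ≈ 0.67
```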
Evidence Levels
| Level | Name | Evidence | Use |
|---|---|---|---|
| 0 | Registered | Stable ID in registry | Baseline existence |
| 1 | Flagged | Single detection signal | Early warning |
| 2 | Above-Threshold | Fusion score ≥ threshold | Treatment queue eligibility |
| 3 | Trap-Correlated | Within active trap capture zone | Spatial validation |
| 4A | Close Inspection Verified | RCI or physical inspection with evidence package | PPV/recall denominators |
| 4B | Screening Observed | Binocular/roadside only | Sentinel trending (excluded from denominators) |
| 5 | Lab Verified | Genetic/taxonomic confirmation | Scientific corroboration |
Outcome Categories
| Category | Criteria | Interpretation |
|---|---|---|
| Improved | Δcomposite > +0.15 AND no new V-cuts | Treatment effective |
| Stable | Δcomposite -0.10 to +0.15 | Inconclusive |
| Declined | Δcomposite < -0.10 OR new V-cuts | Possible failure |
| Confounded | Non-CRB stressor identified | Excluded from effectiveness |
| Lost | Palm not re-imaged | Missing data |
Delta Conventions
Δrisk = risk_T+90 - risk_pre (Negative = improvement)
Δcomposite = weighted combination (Positive = improvement)
Interpretation: Lower risk scores are better (reduced infestation likelihood). Higher composite scores are better (improved overall health). The sign conventions are opposite because risk measures "badness" while composite measures "goodness."
Appendix B: Technical Architecture
This appendix describes the system architecture at a level suitable for engineering estimation and implementation planning.
Design Principles
- Geospatial-first: Store and index data in PostGIS; design pipelines around spatial features
- Immutable, versioned artifacts: Each run (survey, model, risk scoring) must be reproducible and traceable
- Closed-loop governance: Program managers set thresholds; licensed applicators execute; all actions are auditable
- Evidence and metrics: Report denominators and missingness; avoid over-claiming
- Separation of concerns: Keep sensing, processing, scoring, mission packaging, execution and verification modular
System Topology
Data Model (Core Tables)
-- Palm Registry
palm (palm_id UUID PK, geometry POLYGON, centroid POINT,
first_observed TIMESTAMP, last_observed TIMESTAMP,
status VARCHAR, cultivar VARCHAR, height_m FLOAT)
-- Survey Metadata
survey (survey_id UUID PK, footprint POLYGON, captured_at TIMESTAMP,
sensor_config JSONB, processing_ver VARCHAR, qa_status JSONB)
-- Per-Palm Observations
observation (obs_id UUID PK, palm_id FK, survey_id FK,
features JSONB, risk_score FLOAT, shap_factors JSONB)
-- Treatment Operations
treatment_queue (queue_id UUID PK, threshold FLOAT, model_version VARCHAR)
mission (mission_id UUID PK, batch_id UUID, targets JSONB, route JSONB,
constraints JSONB, approved_by VARCHAR, approved_at TIMESTAMP)
execution_log (log_id UUID PK, mission_id FK, flight_track LINESTRING,
events JSONB, weather_snap JSONB, exceptions JSONB)
-- Verification & Audit
verification (palm_id FK, treatment_id UUID,
              t30_score FLOAT, t60_score FLOAT, t90_score FLOAT,
              outcome_class VARCHAR, composite_delta FLOAT)
audit_event (event_id UUID PK, ts TIMESTAMP, actor VARCHAR,
             action VARCHAR, object_id UUID,
             payload JSONB, prev_hash VARCHAR, event_hash VARCHAR)
Pipeline Stages (Detailed)
Stage 1: Ingest
- Upload raw frames to object storage with lifecycle rules
- Extract EXIF/XMP metadata (GPS, camera settings)
- Compute per-frame quality metrics (blur, exposure, coverage)
- Build spatial footprint index
- Record ingestion event with hash in audit log
Stage 2: Photogrammetry
- Invoke containerized engine (Pix4D/ODM)
- Generate orthomosaic with RTK georegistration
- Extract CHM from structure-from-motion
- Compute reflectance products (if MS enabled)
- QA gates: coverage ≥95%, blur score, registration RMSE
- Attestation report with pass/fail stored in event log
Stage 3: Palm Detection
- Tile orthomosaic into 512×512 chips
- Run U-Net segmentation (EfficientNet-B4 encoder)
- Post-process: threshold, connected components, polygon simplification
- Filter by area (2-100 m²) and circularity (>0.4)
- Merge overlapping polygons across tile boundaries
Stage 4: Damage Classification
- Extract 256×256 crown crops per detected palm
- Run V-cut CNN classifier (ResNet-50 backbone)
- Escalation: if confidence 0.4-0.8, retrieve raw frames for re-eval
- Compute spectral indices (NDVI, NDRE, GNDVI, SAVI)
- Aggregate per-crown statistics (mean, std, percentiles)
Stage 5: Risk Scoring
- Assemble feature vector (~50 features): damage, spectral, temporal, spatial, structural
- Run XGBoost inference
- Apply isotonic calibration for probability alignment
- Compute SHAP values for top-5 feature attribution
- Store scores and explanations in observation table
Stage 6: Registry Matching
- Spatial proximity matching (configurable threshold: 2-5m)
- Shape signature comparison (Fourier descriptors)
- Neighborhood graph consistency check
- Height consistency validation from CHM
- Handle merge/split cases with confidence scoring
- Low-confidence matches flagged for manual review
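As a minimal sketch of the spatial gate in registry matching (the shape-signature, neighborhood-graph, and CHM signals listed above are omitted; the function name and data shapes are illustrative):

```python
import math

def match_to_registry(new_centroids, registry, max_dist_m=3.0):
    """Greedy nearest-centroid matching using the spatial signal only.

    new_centroids: {obs_id: (x, y)} in a projected CRS (meters)
    registry: {palm_id: (x, y)} of existing registry entries
    max_dist_m: proximity threshold (configurable 2-5 m per spec)

    Returns (matches, unmatched); unmatched observations become
    candidate new registry entries or manual-review items.
    """
    matches, unmatched, taken = {}, [], set()
    for obs_id, (x, y) in new_centroids.items():
        best, best_d = None, max_dist_m
        for palm_id, (px, py) in registry.items():
            if palm_id in taken:
                continue
            d = math.hypot(x - px, y - py)
            if d <= best_d:
                best, best_d = palm_id, d
        if best is None:
            unmatched.append(obs_id)
        else:
            matches[obs_id] = best
            taken.add(best)     # one observation per palm per survey
    return matches, unmatched
```

A production matcher would combine this with the other signals into a match confidence score rather than accepting proximity alone.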
Stage 7: Mission Generation
- Filter: score ≥ threshold AND outside holdout period
- Cluster nearby targets (DBSCAN or grid-based)
- Route optimization (OR-Tools / OSRM)
- Attach constraint snapshot (geofences, weather limits, credentials)
- Generate mission JSON with schema version
- Hold for program manager batch approval
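The Stage 7 eligibility filter combines the documented defaults (threshold 0.6, holdout 90 days). A hedged sketch, with an illustrative record shape:

```python
from datetime import date, timedelta

def queue_eligible(palms, today, threshold=0.6, holdout_days=90):
    """Filter palms into the treatment queue.

    palms: dicts with risk_score and last_treatment_date
    (None if never treated). Eligibility rule: score at or above
    threshold AND outside the post-treatment holdout period.
    """
    cutoff = today - timedelta(days=holdout_days)
    return [
        p for p in palms
        if p["risk_score"] >= threshold
        and (p["last_treatment_date"] is None
             or p["last_treatment_date"] <= cutoff)
    ]

palms = [
    {"palm_id": "p1", "risk_score": 0.72, "last_treatment_date": None},
    {"palm_id": "p2", "risk_score": 0.81,
     "last_treatment_date": date(2025, 11, 1)},   # still in holdout
    {"palm_id": "p3", "risk_score": 0.40, "last_treatment_date": None},
]
queued = queue_eligible(palms, today=date(2025, 12, 1))
# Only p1 qualifies: p2 is inside the 90-day holdout, p3 is below threshold
```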
Stage 8: Execution & Reconciliation
- Applicator uploads flight track + application events via API/CLI
- Spatial join: planned targets vs. executed positions
- Compute success rate, flag deviations
- Exception codes: ACCESS_DENIED, OBSTRUCTION, SAFETY_ABORT, WEATHER_ABORT
- Push missed targets back to queue or schedule manual inspection
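The reconciliation spatial join can be sketched as a radius check of executed application events against planned targets (radius and data shapes are illustrative assumptions):

```python
import math

def reconcile(planned, executed, radius_m=5.0):
    """Match planned targets against executed application events.

    planned: {palm_id: (x, y)}; executed: list of (x, y) event
    positions, both in a projected CRS (meters).
    Returns (treated_ids, missed_ids, success_rate). Missed
    targets go back to the queue or to manual inspection.
    """
    treated, missed = [], []
    for palm_id, (x, y) in planned.items():
        hit = any(math.hypot(x - ex, y - ey) <= radius_m
                  for ex, ey in executed)
        (treated if hit else missed).append(palm_id)
    rate = len(treated) / len(planned) if planned else 1.0
    return treated, missed, rate
```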
Stage 9: Verification
- Schedule T+30/60/90 follow-up surveys automatically
- Run change detection: pre vs. post imagery
- Compute risk delta and composite delta
- Apply outcome classification rules
- Flag confounders (drought, storm, pruning, other pests)
- Generate evidence packets per treated palm
Stage 10: Model Maintenance
- Stratified sampling for ground-truth acquisition
- Compute PPV, recall, calibration curves on sample
- Monitor feature and score distribution drift
- Automated retraining when drift exceeds threshold
- Canary deployment: test new model on subset before rollout
- Threshold adjustment guidance based on PPV/recall tradeoffs
Infrastructure Decision Trees
Cloud vs. On-Prem
| If... | Then... |
|---|---|
| Data sovereignty, offline operation, or predictable costs are critical | On-prem: MinIO + self-hosted PostGIS |
| Agility, scalability, less maintenance preferred | Cloud: AWS S3, RDS, managed Kafka |
RGB vs. Multispectral
| If... | Then... |
|---|---|
| Budget allows and early detection sensitivity is paramount | Procure MS cameras from day one |
| Budget constrained or proving concept first | Start RGB, gather baseline, phase in MS later |
Photogrammetry Software
| If... | Then... |
|---|---|
| Budget and support available | Commercial: Pix4D or Agisoft Metashape |
| Cost-constrained, engineering capacity available | Open-source: OpenDroneMap |
Orchestration Tool
| If... | Then... |
|---|---|
| Modern Pythonic interface, easier development | Prefect or Dagster |
| Enterprise standards or existing Airflow infra | Apache Airflow |
Security & Compliance Controls
- Authentication: OAuth 2.0 / JWT for API access
- RBAC: Program Manager, Data Lead, Field Operator roles
- Encryption: TLS for transit, AES-256 for storage
- Audit Trail: Hash-chained event log (Merkle tree optional)
- Credential Checks: Applicator license validation before mission release
- Data Retention: Raw imagery 12-24 months; processed data 5+ years; audit logs indefinite
AI Query Interface Architecture (Phase 2)
SkyGuard's architecture supports a local LLM for natural language data queries and narrative summary generation. This capability is planned for Phase 2 deployment after core pipeline validation.
System Flow
Components
| Component | Technology | Function |
|---|---|---|
| LLM Engine | Local model (Llama 3, Mistral, or fine-tuned) | Intent parsing, narrative generation |
| Query Builder | Text-to-SQL from intent | Translate NL to database queries |
| Data Grounding | Query results injection | Ensure outputs reflect actual data |
| Report Formatter | Templates + LLM fill | Structured output generation |
Security Constraints
- Query restrictions: Text-to-SQL limited to SELECT operations only; no INSERT/UPDATE/DELETE permitted
- Authorized views: LLM queries execute against restricted database views, not raw tables
- Audit logging: All LLM-generated queries logged with user context
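A naive application-level guard for the SELECT-only restriction might look like the following. This is defense in depth only, not the real enforcement mechanism, which is a read-only database role scoped to the authorized views:

```python
import re

FORBIDDEN = re.compile(
    r"\b(insert|update|delete|drop|alter|create|grant|truncate)\b",
    re.IGNORECASE,
)

def is_select_only(sql: str) -> bool:
    """Reject LLM-generated SQL that is not a single SELECT.

    Strips line comments, refuses stacked statements, and blocks
    data-modifying keywords. Deliberately conservative: a SELECT
    containing a forbidden word in a string literal is rejected too.
    """
    stripped = re.sub(r"--.*", "", sql).strip().rstrip(";")
    if ";" in stripped:                      # stacked statements
        return False
    if not stripped.lower().startswith("select"):
        return False
    return not FORBIDDEN.search(stripped)
```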
Note: Core pipeline functionality operates independently. LLM interface enhances data accessibility but is not a critical path dependency.
Scaling Considerations
The architecture is designed to scale horizontally at key bottleneck points:
Photogrammetry (Stage 2)
Large surveys can be partitioned into spatial blocks for parallel processing. OpenDroneMap supports distributed mode; Pix4D offers cluster licensing. For Phase 1 pilot (~5,000 acres), single-node processing is sufficient. Parallelization path available when survey volume or frequency increases.
ML Inference (Stages 3-5)
Tile-based processing enables horizontal scaling across multiple GPU workers. The orchestrator can dispatch tiles to a worker pool and aggregate results. Current cost estimate assumes single GPU with burst capacity; additional workers can be added linearly as needed.
Database
PostGIS supports read replicas for query load distribution. Observation table can be partitioned by survey_id or time range if row counts exceed millions. Spatial indices on geometry columns are critical for registry matching performance.
Phase 1 target: ~5,000 acres (estimated 30K–70K palms), monthly surveys, <24h processing. Architecture supports 5-10× scale with horizontal additions, no redesign required.
Backup and Recovery
| Component | Backup Strategy | Retention |
|---|---|---|
| PostGIS Database | Daily automated snapshots + continuous WAL archiving | 30 days snapshots, 7 days WAL |
| Object Storage (raw imagery) | Cross-region replication (cloud) or RAID + offsite (on-prem) | Indefinite (raw data is source of truth) |
| Model Registry | Versioned artifacts in object storage | All versions retained |
| Audit Log | Append-only with daily export to cold storage | Indefinite (compliance requirement) |
| Configuration | Infrastructure-as-code in version control | Full git history |
Recovery Targets
- RPO (Recovery Point Objective): <1 hour for database, <24 hours for imagery
- RTO (Recovery Time Objective): <4 hours for full system restore from backups
Cost Estimation (~5,000 acres, monthly surveys)
Note: This estimate covers cloud infrastructure only, excluding software licenses. Infrastructure costs scale primarily with survey acreage, not palm count. See Appendix G for fully-loaded estimate including Pix4D, Mapbox, and higher storage assumptions.
| Component | Specification | Monthly Cost |
|---|---|---|
| Compute (API + workers) | 4× c5.xlarge equivalent | $600 |
| GPU (ML inference) | 1× g4dn.xlarge (burst) | $300 |
| Database | db.r5.large (PostGIS) | $250 |
| Storage | 2TB S3 + 100GB EBS | $100 |
| Event streaming | Managed Kafka (if used) | $150 |
| Network | Moderate egress | $50 |
| Total Infrastructure | | ~$1,450/month |
Does not include survey flight operations, treatment operations, software licenses, or personnel.
Appendix C: Detection Pipeline
This appendix details the sensing, segmentation, and scoring components of the SkyGuard detection system.
Survey Specifications
RGB Imaging
| Parameter | Specification | Rationale |
|---|---|---|
| GSD | 2-3 cm | Sufficient for V-cut detection |
| Overlap | 80% front, 70% side | Enables SfM and frame redundancy |
| Altitude | 60-80m AGL | Balance GSD vs. coverage rate |
| Speed | 8-12 m/s | Minimize motion blur |
| Sun angle | >30° | Reduce harsh shadows |
Multispectral (Optional Phase)
| Parameter | Specification | Rationale |
|---|---|---|
| Bands | R, G, B, RE, NIR | Standard vegetation indices |
| GSD | 5-8 cm | Spectral signal priority |
| Calibration | Reflectance panel pre/post | Temporal comparability |
Palm Crown Segmentation
Data Annotation
- Select representative orthomosaics across zones and conditions
- Label palm crowns using QGIS or dedicated labeling platform
- Target: 12,000+ annotated crowns for initial training
- Include edge cases: overlapping crowns, shadows, partial visibility
Model Architecture
Primary model: U-Net with EfficientNet-B4 encoder
| Component | Specification |
|---|---|
| Input | 512×512 RGB tiles, 2-3 cm GSD |
| Output | Binary mask (crown/background) |
| Augmentation | Rotation, flip, color jitter, blur |
| Loss | Dice + BCE weighted |
| Target IoU | ≥0.85 (bench-validated: 0.89 on internal labeled dataset) |
Post-Processing Pipeline
- Threshold binary mask at 0.5
- Connected component labeling
- Contour extraction
- Polygon simplification (Douglas-Peucker)
- Filter by area: min 2m², max 100m²
- Filter by circularity: >0.4
- Merge overlapping polygons from adjacent tiles
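The area and circularity gates can be expressed directly; circularity here is the standard isoperimetric ratio 4πA/P², which equals 1.0 for a perfect circle (function name is illustrative):

```python
import math

def passes_geometry_filters(area_m2, perimeter_m,
                            min_area=2.0, max_area=100.0,
                            min_circularity=0.4):
    """Apply the post-processing area and circularity gates.

    Circularity = 4πA / P²: 1.0 for a circle, lower for ragged
    or elongated detections (which are likely non-crown artifacts).
    """
    if not (min_area <= area_m2 <= max_area):
        return False
    circularity = 4 * math.pi * area_m2 / (perimeter_m ** 2)
    return circularity >= min_circularity

# A roughly circular 3 m-radius crown: area ≈ 28.3 m², perimeter ≈ 18.85 m
print(passes_geometry_filters(28.3, 18.85))   # passes (circularity ≈ 1.0)
```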
V-Cut Detection (Damage Classification)
Training Data
- Crown crops labeled: healthy, damaged, confounded
- Confounded class includes: wind damage, nutrient stress, mechanical pruning
- Target: 3,000+ labeled examples per class
Model Architecture
Primary model: CNN classifier (ResNet-50 backbone)
| Component | Specification |
|---|---|
| Input | 256×256 crown crops, normalized |
| Output | V-cut probability [0, 1] |
| Training | Transfer learning from ImageNet |
| Validation | Cross-validation + early stopping |
Escalation Protocol
When V-cut confidence is in uncertain range (0.4-0.8):
- Retrieve all raw frames overlapping crown centroid
- Select top-3 by: lowest blur, best sun angle, highest overlap
- Extract high-resolution crops from raw frames
- Re-run inference on raw crops
- Use max score if any crop >0.7; min score if all <0.3
- Otherwise flag for calibration sampling
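The decision rule at the end of the escalation protocol can be captured in a few lines (function and parameter names are ours):

```python
def escalation_score(initial_conf, crop_scores, high=0.7, low=0.3):
    """Resolve an uncertain V-cut call (confidence 0.4-0.8) using
    re-inference scores from the top raw-frame crops.

    Returns (score, needs_calibration_sample), per the protocol:
    take the max if any crop clears the high bar, the min if all
    fall below the low bar, otherwise keep the original score and
    flag the palm for calibration sampling.
    """
    if any(s > high for s in crop_scores):
        return max(crop_scores), False
    if all(s < low for s in crop_scores):
        return min(crop_scores), False
    return initial_conf, True
```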
Spectral Analysis
Vegetation Indices
| Index | Formula | Use |
|---|---|---|
| NDVI | (NIR - R) / (NIR + R) | General health |
| NDRE | (NIR - RE) / (NIR + RE) | Chlorophyll stress (early) |
| GNDVI | (NIR - G) / (NIR + G) | Canopy density |
| SAVI | ((NIR - R) / (NIR + R + L)) × (1 + L), soil factor L ≈ 0.5 | Soil-adjusted |
Per-Crown Aggregation
For each crown polygon, extract from raster:
- Mean, standard deviation
- 10th and 90th percentiles
- Anomaly score vs. neighborhood baseline
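A minimal sketch of per-pixel NDVI and the per-crown aggregation (nearest-rank percentiles; the neighborhood anomaly score is omitted, and function names are illustrative):

```python
import statistics

def ndvi(nir, red):
    """Per-pixel NDVI = (NIR - R) / (NIR + R); 0.0 on a zero sum."""
    denom = nir + red
    return (nir - red) / denom if denom else 0.0

def crown_stats(values):
    """Aggregate per-pixel index values over one crown polygon
    using simple nearest-rank percentiles."""
    ordered = sorted(values)
    p10 = ordered[int(0.10 * (len(ordered) - 1))]
    p90 = ordered[int(0.90 * (len(ordered) - 1))]
    return {
        "mean": statistics.mean(values),
        "std": statistics.pstdev(values),
        "p10": p10,
        "p90": p90,
    }
```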
Risk Score Fusion
Feature Vector (~50 features)
| Category | Features |
|---|---|
| Damage | V-cut probability, count, extent ratio |
| Spectral | NDVI/NDRE mean, std, percentiles, anomaly |
| Temporal | NDVI delta, risk delta, days since observation |
| Spatial | Distance to trap, trap count within 500m, zone prevalence |
| Structural | Crown area, circularity, height (CHM), area change |
XGBoost Configuration
- Objective: binary logistic
- Calibration: isotonic regression post-hoc
- Explainability: SHAP TreeExplainer
- Output: calibrated probability [0, 1] + top-5 feature drivers
Performance Targets
| Metric | Target | Status |
|---|---|---|
| Segmentation IoU | ≥0.85 | Bench-validated (0.89 on internal labeled dataset, n=2,400 tiles) |
| V-cut precision | ≥0.80 | Target |
| V-cut recall | ≥0.85 | Target |
| Fusion PPV | ≥50% | Target |
| Fusion Recall | ≥85% | Target |
| Calibration (Brier) | <0.25 | Target |
| Processing latency | <24 hours | Target |
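The Brier calibration target in the table above is the mean squared error between calibrated probabilities and binary outcomes; a predictor that always outputs 0.5 scores exactly 0.25, so the target requires beating an uninformative baseline:

```python
def brier_score(probs, outcomes):
    """Mean squared error between calibrated probabilities and
    binary ground-truth outcomes (1 = infested). Lower is better;
    the program target is < 0.25."""
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

print(brier_score([0.9, 0.2, 0.8], [1, 0, 1]))   # ≈ 0.03
```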
Appendix D: Verification & Truth Budget
This appendix describes the outcome verification system and the calibration sampling strategy that provides biological anchoring for program claims.
The Verification Problem
Imaging-based detection and outcome tracking are self-referential: if the model is systematically wrong, outcomes classified from the model's own outputs cannot reveal the error.
The Truth Budget solves this by allocating a bounded monthly investment in independent biological verification that provides ground truth for model calibration and outcome anchoring.
Operational Monitoring Cadence
Key insight: Verification is not a special follow-up activity—it happens automatically as part of the regular survey cadence.
How It Actually Works
| Component | Reality |
|---|---|
| Survey frequency | ~30 days for entire footprint (with small variance for weather/scheduling) |
| Treatment tracking | Per-palm: last_treatment_date, last_treatment_id |
| Outcome calculation | When survey processes, compute days_since_treatment for each treated palm |
| Window bucketing | Classify into T+30, T+60, T+90 windows based on actual elapsed time |
Data Model
-- Per-palm treatment history
palm.last_treatment_date -- most recent treatment timestamp
palm.last_treatment_id -- links to treatment record
palm.treatment_count -- total treatments received
-- When each survey processes:
FOR each palm WITH last_treatment_date:
days_since = survey.date - palm.last_treatment_date
IF days_since BETWEEN 25 AND 40: bucket = 'T+30'
IF days_since BETWEEN 55 AND 70: bucket = 'T+60'
IF days_since BETWEEN 85 AND 100: bucket = 'T+90'
-- Compute outcome metrics at this observation point
delta_risk = current_risk - pre_treatment_risk
delta_composite = weighted_outcome_score(current, pre)
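The window bucketing in the pseudocode above is a pure function of elapsed days (the function name is illustrative):

```python
def verification_bucket(days_since_treatment):
    """Classify actual elapsed days since treatment into the
    T+30/T+60/T+90 verification windows. Days outside every
    window return None: no outcome is computed at this pass."""
    if 25 <= days_since_treatment <= 40:
        return "T+30"
    if 55 <= days_since_treatment <= 70:
        return "T+60"
    if 85 <= days_since_treatment <= 100:
        return "T+90"
    return None
```

This is how natural survey variance (28, 32, 35 days) resolves to a single window without dedicated verification flights.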
Why This Matters
- No special flights: Verification piggybacks on routine surveys
- Natural variance handled: Actual days (28, 32, 35) classified into windows
- Continuous tracking: Every survey updates outcome status for all treated palms
- Asset-level history: Full treatment timeline per palm, not just snapshots
Truth Budget Architecture
Three-Stream Allocation (~200 inspections/month)
| Stream | Purpose | Selection | Size |
|---|---|---|---|
| A. Audit | Measure PPV, recall, calibration | Stratified random | 100 |
| B. Active Learning | Improve model on hard cases | Near-threshold, disagreements | 50 |
| C. Sentinel | Detect hidden spread | Below-threshold, fixed plots | 50 |
Remote Close Inspection (RCI)
Low-altitude inspection drones capture crown-level evidence at close range. RCI is the default Level 4A verification modality.
RCI Independence Specification
| Requirement | Specification |
|---|---|
| Separate flight | Not the survey flight; dedicated RCI mission |
| Distinct geometry | 3-8m altitude (vs. 60-120m survey altitude) |
| Separate operator | Inspector ≠ survey pilot |
| Blinding | Reviewer does not see risk score before classification |
Outcome Classification
| Category | Criteria | Interpretation |
|---|---|---|
| Improved | Δcomposite > +0.15 AND no new V-cuts | Treatment effective |
| Stable | Δcomposite -0.10 to +0.15 | Inconclusive |
| Declined | Δcomposite < -0.10 OR new V-cuts | Possible failure or reinfest |
| Confounded | Non-CRB stressor identified | Excluded from effectiveness calc |
| Not Yet Observable | <25 days since treatment | Too early to classify |
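The classification rules above (plus the Lost category from Appendix A) imply a precedence order, sketched below with illustrative argument names:

```python
def classify_outcome(days_since, delta_composite,
                     new_v_cuts, confounded, re_imaged=True):
    """Apply the outcome-classification rules in precedence order.

    Thresholds follow the outcome table: Improved requires
    Δcomposite > +0.15 AND no new V-cuts; Declined is
    Δcomposite < -0.10 OR new V-cuts; Stable covers the band
    between. Confounders are excluded from effectiveness."""
    if not re_imaged:
        return "Lost"
    if days_since < 25:
        return "Not Yet Observable"
    if confounded:
        return "Confounded"
    if delta_composite < -0.10 or new_v_cuts:
        return "Declined"
    if delta_composite > 0.15:
        return "Improved"
    return "Stable"
```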
Stratified Estimator Specification
- Stratum Weights: Proportional to registry counts per stratum
- Stream Treatment: Stream A for PPV/recall; B+C tracked separately
- Variance: Per-stratum Wilson interval; linearized aggregation
- Finite Population Correction: Applied when sample >5% of stratum
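The per-stratum Wilson interval referenced above can be computed directly; it is more reliable than the normal approximation at audit-sample sizes (the finite population correction is omitted in this sketch):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a per-stratum proportion,
    e.g. PPV within one Stream A stratum (successes = confirmed
    positives among n audited above-threshold palms)."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

lo, hi = wilson_interval(55, 100)   # 55 confirmed of 100 audited
# roughly (0.45, 0.64): "PPV 55% (95% CI: 45-64%)"
```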
Appendix E: Data Governance
This appendix describes the data governance framework, audit trail architecture, and compliance controls that make SkyGuard operations defensible.
Why Auditability Is First-Class
Biosecurity operations face scrutiny from: regulators (aviation, pesticide), community stakeholders, program funders, interagency partners, media and public interest groups.
SkyGuard is designed so that every decision is traceable, every action is logged, and every claim is supported by documented evidence.
Immutable Event Logging
All operational events are recorded in an append-only log with hash chaining:
- Survey ingestion and QA attestation
- Registry match/create decisions
- Model version used for scoring
- Threshold snapshot used for queueing
- Batch approval actions (who/when/what)
- Mission package generation and release
- Execution logs and reconciliation deltas
- Verification outcomes and confounder flags
Implementation note: For Phase 1, the audit_event table with hash chaining provides the required immutability guarantees. Kafka or equivalent streaming infrastructure can be added in Phase 2 if real-time event processing or external integration requires it. The audit properties (append-only, hash-linked, tamper-evident) are preserved in either implementation.
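A minimal in-memory sketch of the hash-chaining scheme (the production log is the append-only audit_event table; field and function names here are illustrative):

```python
import hashlib
import json

def append_event(log, actor, action, payload):
    """Append a hash-chained event. Each event hash covers the
    previous event's hash, so editing any earlier record breaks
    every subsequent link in the chain."""
    prev_hash = log[-1]["event_hash"] if log else "0" * 64
    body = {"actor": actor, "action": action,
            "payload": payload, "prev_hash": prev_hash}
    body["event_hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)
    return body

def verify_chain(log):
    """Recompute every hash; returns False if any record was
    modified after the fact (tamper-evidence check)."""
    prev = "0" * 64
    for ev in log:
        body = {k: ev[k] for k in ("actor", "action", "payload")}
        body["prev_hash"] = prev
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if expected != ev["event_hash"]:
            return False
        prev = ev["event_hash"]
    return True
```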
Evidence Packets
Each treated palm produces a complete evidence packet containing:
- Palm ID and registry history
- Pre-treatment imagery and features
- Risk score with SHAP explanation
- Mission target record with constraints
- Execution trace excerpt
- T+30/60/90 verification outcomes
Data Integrity and Recovery
SkyGuard data is protected through layered backup and recovery mechanisms:
| Component | Strategy | Recovery Target |
|---|---|---|
| Palm Registry | Daily DB snapshots + WAL archiving | RPO <1 hour |
| Raw Imagery | Replicated object storage | No data loss (source of truth) |
| Audit Log | Append-only with cold storage export | Full history preserved |
| Model Artifacts | Versioned in registry | All versions retained |
Full backup and recovery specifications in Appendix B.
System Boundary
SkyGuard provides detection, targeting, and verification. The following remain operator responsibility:
| Area | SkyGuard Provides | Operator Responsible For |
|---|---|---|
| Land access | Target coordinates | Access permissions |
| Pesticide handling | Application coordinates | Procurement, storage, disposal |
| Incident response | Audit trail, logs | Spill response, complaints |
| Licensing | Credential verification | Maintaining valid licenses |
Appendix F: Validation Plan
This appendix provides detailed validation gates, evidence artifacts, and go/no-go criteria for each phase of SkyGuard deployment.
What "Ready" Means
SkyGuard is "ready for program use" when it produces decision-grade outputs on schedule, with known error bounds, and with defensible records.
Validation Phases
Phase 0: Bench Validation (4-8 weeks)
Exit criteria: Processing latency meets requirement; QA gates functional; audit log integrity verified.
Phase 1: Pilot Launch (Month 1-3)
Exit criteria: ≥95% coverage completion; processing latency stable; mission reconciliation within tolerance.
Phase 2: Verification Emergence (Month 4-6)
Exit criteria: Monthly outcome reports without manual rework; confounders documented.
Phase 3: Scale Proof (Month 7-12)
Exit criteria: Model improvement demonstrated; sustained operations with defined staffing; evidence quality supports public reporting.
Go/No-Go Decision Framework
Hard Stops
- Audit log integrity failure
- Unacceptable coverage gaps
- Credential/compliance failure
- Pipeline non-reproducibility
Performance Floors
- PPV minimum: <30% sustained → Halt operations
- Recall minimum: <70% sustained → Threshold adjustment required
- Verification missingness: >25% → Coverage remediation required
Appendix G: Implementation Stack
This appendix provides a reference implementation architecture and sprint-based development plan for engineering estimation.
Component Selection Matrix
| Layer | Component | Primary Option | Alternative |
|---|---|---|---|
| Object Storage | Raw imagery, artifacts | AWS S3 | MinIO (on-prem) |
| Spatial Database | Registry, observations | PostgreSQL + PostGIS | — |
| Event Log | Audit trail | Append-only SQL table | Apache Kafka |
| Orchestration | Pipeline DAGs | Prefect | Dagster, Airflow |
| Photogrammetry | Ortho/CHM generation | Pix4D | OpenDroneMap |
| ML Framework | Segmentation, CNN | PyTorch | — |
| Risk Scoring | Fusion model | XGBoost | LightGBM |
| Explainability | Feature attribution | SHAP | — |
| Model Registry | Versioning, tracking | MLflow | W&B |
| API Framework | REST endpoints | FastAPI | Flask |
| Frontend | Dashboard | React | Vue |
| Mapping | Spatial visualization | Mapbox GL | Leaflet |
| Routing | Mission optimization | OR-Tools | OSRM |
| Local LLM (Phase 2) | Natural language interface | Llama 3 / Mistral (fine-tuned) | Phi-3, Qwen |
| LLM Serving (Phase 2) | Inference runtime | vLLM / llama.cpp | Ollama, TGI |
| Report Gen | Document export | WeasyPrint | ReportLab |
Development Sprints (2-week cycles)
Sprint 0: Kickoff & Design
- Stakeholder workshops with municipal partners, pilots, applicators
- Sensor selection: RGB vs. multispectral decision
- Draft flight SOPs (altitude, speed, overlap, geofencing)
- Infrastructure selection: cloud vs. on-prem
- Define naming conventions for survey IDs, model versions
- Sketch initial database schema
Sprint 1: Data Plane Setup
- Provision object storage with access policies and lifecycle rules
- Deploy PostgreSQL + PostGIS; create base tables
- Implement event log (append-only table or Kafka)
- Write ingestion scripts (upload + log events)
- Configure automated backups and restoration runbook
Sprint 2: Photogrammetry Pipeline
- Containerize photogrammetry engine (Docker)
- Build ortho generation wrapper script
- Implement QA metrics: blur, exposure, coverage gaps
- Create attestation logging for audit trail
- Define reflight vs. accept thresholds
Sprint 3: Palm Detection
- Data annotation: label palm crowns in QGIS
- Train U-Net segmentation model (PyTorch)
- Package model for inference (TorchScript/ONNX)
- Build inference pipeline → crown polygons → PostGIS
- Implement registry insertion service
Sprint 4: Damage Classification
- Create labeled dataset: healthy / damaged / confounded
- Train CNN classifier (ResNet-50, transfer learning)
- Implement spectral index computation (GDAL/Rasterio)
- Integrate features into unified per-palm feature set
Sprint 5: Risk Scoring
- Assemble ~50 feature vector per palm
- Train XGBoost fusion model
- Apply isotonic calibration
- Integrate SHAP for explainability
- Implement configurable threshold + holdout logic
Sprint 6: Registry Matching
- Develop matching algorithm (spatial + shape + spectral)
- Handle merge/split cases with confidence scoring
- Implement registry update service with event logging
- Create unit tests with synthetic edge cases
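Cross-cycle matching produces scored edges rather than hard assignments. A minimal sketch using only centroid distance and crown-area ratio (the real service also incorporates shape and spectral features in PostGIS; the weights here are illustrative):

```python
# Sketch of Sprint 6 cross-cycle matching. Production matching combines
# spatial, shape, and spectral features; this minimal version scores
# edges on centroid distance + crown-area ratio. Weights are assumptions.
import math

def match_edges(prev, curr, max_dist_m=3.0):
    """prev/curr: dicts {palm_id: (x, y, crown_area_m2)} in a metric CRS.
    Returns (prev_id, curr_id, confidence) edges, best first; low-confidence
    edges route to the human resolution queue, not hard assignment."""
    edges = []
    for pid, (px, py, pa) in prev.items():
        for cid, (cx, cy, ca) in curr.items():
            d = math.hypot(px - cx, py - cy)
            if d > max_dist_m:
                continue
            dist_score = 1.0 - d / max_dist_m
            area_score = min(pa, ca) / max(pa, ca)
            edges.append((pid, cid, round(0.6 * dist_score + 0.4 * area_score, 3)))
    return sorted(edges, key=lambda e: -e[2])
```

Emitting a match graph rather than forced one-to-one assignments is what lets merge/split cases surface as multiple competing edges.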
Sprint 7: Mission Generation
- Implement clustering (DBSCAN / grid-based)
- Integrate routing optimization (OR-Tools)
- Define mission JSON schema with constraints
- Build mission generator service
- Create approval API endpoints
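The grid-based clustering path (the simpler alternative to DBSCAN) can be sketched as binning flagged palms into cells and chunking each cell into mission-sized batches. Cell size, batch size, and the mission schema fields shown are illustrative assumptions:

```python
# Sketch of Sprint 7 grid-based clustering. Cell size, batch size, and
# mission JSON fields are illustrative assumptions, not the final schema.
from collections import defaultdict

def grid_missions(palms, cell_m=200.0, max_per_mission=25):
    """palms: list of (palm_id, x, y) in a metric CRS. Bins palms into
    square cells, then chunks each cell into mission-sized batches
    awaiting approval via the mission API."""
    cells = defaultdict(list)
    for pid, x, y in palms:
        cells[(int(x // cell_m), int(y // cell_m))].append(pid)
    missions = []
    for (cx, cy), ids in sorted(cells.items()):
        for i in range(0, len(ids), max_per_mission):
            missions.append({
                "mission_id": f"M-{cx}-{cy}-{i // max_per_mission}",
                "cell": [cx, cy],
                "palm_ids": ids[i:i + max_per_mission],
                "status": "pending_approval",
            })
    return missions
```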
Sprint 8: API & Dashboard
- Design REST API (FastAPI) with OpenAPI spec
- Implement OAuth/JWT authentication + RBAC
- Build React frontend with routing and state
- Integrate Mapbox for spatial visualization
- Create approval screens and threshold controls
Sprint 9: Execution Logging
- Define execution log schema (GPS track, events, weather)
- Build upload API + CLI tool for applicators
- Implement reconciliation service (planned vs. actual)
- Create exception handling workflow
- Add execution stats to dashboard
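Reconciliation of planned versus actual treatment is, at its core, a set comparison. A minimal sketch (field names are assumptions; the real service also reconciles GPS tracks and timestamps):

```python
# Sketch of Sprint 9 reconciliation: compare a mission's planned palms
# against the applicator's execution log. Field names are assumptions;
# the real service also reconciles GPS tracks, events, and weather.
def reconcile(planned_ids, executed_ids):
    planned, executed = set(planned_ids), set(executed_ids)
    return {
        "treated_as_planned": sorted(planned & executed),
        "missed": sorted(planned - executed),      # -> exception workflow
        "unplanned": sorted(executed - planned),   # -> flag for review
        "completion_rate": len(planned & executed) / max(len(planned), 1),
    }
```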
Sprint 10: Verification & Outcome Tracking
- Add treatment tracking fields to palm table (last_treatment_date, last_treatment_id)
- Implement days_since_treatment calculation on each survey run
- Build window bucketing logic (T+30/60/90 based on actual elapsed days)
- Implement change detection: pre-treatment vs. current observation
- Build outcome classifier with category rules
- Add confounder detection (drought, storm, pruning flags)
- Generate evidence packets per treated palm
- Track "Not Yet Observable" status for recent treatments
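The window bucketing logic can be sketched as a pure function of elapsed days. The ±15-day window edges are illustrative assumptions, not program policy:

```python
# Sketch of Sprint 10 window bucketing by actual elapsed days since
# treatment. The +/-15-day window edges are illustrative assumptions.
from datetime import date

def observation_window(treatment: date, observed: date) -> str:
    """Bucket an observation into a T+30/60/90 verification window,
    or report 'not_yet_observable' / 'beyond_windows'."""
    elapsed = (observed - treatment).days
    if elapsed < 15:
        return "not_yet_observable"
    for center in (30, 60, 90):
        if abs(elapsed - center) <= 15:
            return f"T+{center}"
    return "beyond_windows"
```

Bucketing on actual elapsed days, rather than survey sequence number, keeps outcome windows honest when survey cadence slips.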
Sprint 11: Model Maintenance
- Design stratified sampling for ground-truth
- Build PPV/recall/calibration dashboards
- Implement drift monitoring with alerts
- Automate retraining pipeline with MLflow
- Create canary deployment workflow
Sprint 12: Governance Hardening
- Implement hash-chained immutable audit log
- Finalize RBAC across all endpoints
- Compliance review with legal counsel
- Define data retention policies and scripts
- Document incident response procedures
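The hash-chained audit log works by having each entry commit to the previous entry's hash, so any retroactive edit breaks every subsequent link. A minimal sketch:

```python
# Sketch of the Sprint 12 hash-chained immutable audit log: each entry
# commits to the previous entry's hash, so retroactive edits are detectable.
import hashlib
import json

GENESIS = "0" * 64

def append_entry(chain, event: dict):
    """Append an event; its hash covers both the event and prior hash."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps({"prev": prev_hash, "event": event}, sort_keys=True)
    chain.append({"prev": prev_hash, "event": event,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})
    return chain

def verify_chain(chain) -> bool:
    """Recompute every link; any tampered entry breaks verification."""
    prev_hash = GENESIS
    for entry in chain:
        payload = json.dumps({"prev": prev_hash, "event": entry["event"]},
                             sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True
```

In production the chain head would be periodically anchored to external storage so the log cannot be silently rewritten wholesale.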
Sprint 13: AI Query Interface (Phase 2)
Phase 2 enhancement—not required for core pipeline operations. Schedule after Phase 1 validation.
- Select and deploy local LLM (Llama 3 / Mistral, 7B-13B range)
- Set up inference server (vLLM or llama.cpp with GPU/CPU quantization)
- Build text-to-SQL layer for common query patterns (SELECT-only)
- Implement data grounding: inject query results into prompt
- Create narrative templates for standard summaries (zone, treatment, outcome)
- Implement query security: authorized views, audit logging
- Build export pipeline (narrative → PDF/CSV)
- User acceptance testing with program managers
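The SELECT-only guard for LLM-generated SQL can be sketched as a first-pass filter. A production system would additionally restrict queries to authorized views and audit-log every request; this regex check is an illustration only:

```python
# Sketch of the Sprint 13 SELECT-only guard for LLM-generated SQL.
# Production would also restrict to authorized views and audit-log
# every query; this regex pass is a first-line illustration only.
import re

FORBIDDEN = re.compile(
    r"\b(insert|update|delete|drop|alter|create|grant|truncate|copy)\b",
    re.IGNORECASE)

def is_safe_select(sql: str) -> bool:
    stmt = sql.strip().rstrip(";")
    if ";" in stmt:                       # reject multi-statement payloads
        return False
    if not stmt.lower().startswith("select"):
        return False
    return not FORBIDDEN.search(stmt)
```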
Deployment Configurations
Docker Compose (Development)
services:
  db:
    image: postgis/postgis:15-3.3
    volumes: [pgdata:/var/lib/postgresql/data]
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
  api:
    build: {context: ., dockerfile: Dockerfile.api}
    depends_on: [db, minio]
  worker:
    build: {context: ., dockerfile: Dockerfile.pipeline}
    deploy:
      resources:
        reservations:
          devices: [{capabilities: [gpu]}]
Kubernetes (Production)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: skyguard-api
spec:
  replicas: 3
  template:
    spec:
      containers:
        - name: api
          image: skyguard/api:latest
          resources:
            requests: {memory: 512Mi, cpu: 500m}
            limits: {memory: 1Gi, cpu: 1000m}
Cost Estimation (Fully Loaded)
Note: This estimate includes software licenses and higher storage/network assumptions. See Appendix B for cloud-infrastructure-only baseline.
| Category | Item | Monthly |
|---|---|---|
| Infrastructure | Compute (API + workers) | $600 |
| Infrastructure | GPU inference - ML models (burst) | $300 |
| Infrastructure | Database (PostGIS) | $250 |
| Infrastructure | Storage + Network | $300 |
| Software | Pix4D (if commercial) | $350 |
| Software | Mapbox / monitoring | $100 |
| Total Infrastructure (Phase 1) | | ~$1,900/month |
| Phase 2 | Local LLM inference (add) | +$150 |
Note: LLM query interface is Phase 2. Phase 1 focuses on core detection/treatment/outcome pipeline.
Build vs. Buy Summary
Core IP (Build In-House)
- Registry persistence and matching logic
- Risk scoring + threshold decision contract
- Verification and outcome classification
- Compliance-by-design mission packaging
- Immutable audit trail architecture
- Calibration sampling and drift monitoring
Commodity (Buy / Integrate)
- Photogrammetry engine
- Cloud infrastructure
- Mapping and visualization
- Routing optimization libraries
Appendix H: References
This appendix provides references supporting background claims and technical approaches.
CRB Biology and Impact
Bedford, G.O. (1980). Biology, ecology, and control of palm rhinoceros beetles. Annual Review of Entomology, 25(1), 309-339.
Gressitt, J.L. (1953). The coconut rhinoceros beetle with particular reference to Palau Islands. Bernice P. Bishop Museum Bulletin, 212.
Hawaiʻi Department of Agriculture (HDOA). CRB Response Program materials (various).
Remote Sensing and Tree Detection
Weinstein, B.G., et al. (2019). Individual tree-crown detection in RGB imagery using semi-supervised deep learning neural networks. Remote Sensing, 11(11), 1309.
Osco, L.P., et al. (2020). A review on deep learning in UAV remote sensing. International Journal of Applied Earth Observation and Geoinformation, 102, 102456.
Machine Learning Methods
Chen, T., & Guestrin, C. (2016). XGBoost: A scalable tree boosting system. Proceedings of KDD, 785-794.
Lundberg, S.M., & Lee, S.I. (2017). A unified approach to interpreting model predictions. NeurIPS, 30.
Ronneberger, O., et al. (2015). U-Net: Convolutional networks for biomedical image segmentation. MICCAI, 234-241.
Software and Tools
| Tool | Use | Reference |
|---|---|---|
| PyTorch | Deep learning | pytorch.org |
| XGBoost | Gradient boosting | xgboost.readthedocs.io |
| SHAP | Explainability | shap.readthedocs.io |
| PostGIS | Spatial database | postgis.net |
| FastAPI | API framework | fastapi.tiangolo.com |
Market Research and Policy Sources
Hawaii Invasive Species Council (2025). Coconut Rhinoceros Beetle (CRB) Eradication Battle Plan. dlnr.hawaii.gov/hisc
Hawaii Department of Agriculture and Biosecurity. HB643 Testimony, FY2025. capitol.hawaii.gov
Civil Beat (2025). Hawaiʻi Primes Itself to Battle Biosecurity Threats. civilbeat.org
Civil Beat (2024). Disease Could Kill Most of Hawaii's ʻŌhiʻa Forests in 20 Years. civilbeat.org
U.S. Congress Joint Economic Committee (2024). How the Renewed Compacts of Free Association Support U.S. Economic, National Security, and Climate Goals. jec.senate.gov
U.S. Department of the Interior. Office of Insular Affairs—Compacts of Free Association. doi.gov/oia
USDA NIFA. SBIR/STTR Program Information, FY2025-2026. nifa.usda.gov
CRB Hawaii. The CRB Response; 10 Years with CRB in Hawaii. crbhawaii.org
Governor Josh Green, M.D. (2024). News Release on Biosecurity Funding for Dept. of Agriculture. governor.hawaii.gov
Honolulu Parks and Urban Forestry
City & County of Honolulu DPR. About Parks and Recreation (4,986 acres, 403 facilities). honolulu.gov/dpr
City & County of Honolulu DPR. Division of Urban Forestry (~100,000 street and park trees). honolulu.gov/dpr
Smart Trees Pacific. Citizen Foresters program; Hawaii Forest Action Plan gap analysis. smarttreespacific.org
Honolulu Office of the City Auditor (2020). Audit of DPR Expenditures and Operations.
This reference list will be expanded as the program develops.
Appendix I: Beachhead Customer Profile & Build Estimate
This appendix provides a concrete build estimate for an early adopter deployment, demonstrating that SkyGuard has been scoped to implementation-ready detail.
Beachhead Customer: City & County of Honolulu
| Attribute | Value |
|---|---|
| Coverage Area | ~5,000 acres of municipal park land (DPR manages 4,986 acres across 403 facilities) |
| Estimated Palm Count | 30,000–70,000 palms (working estimate; see note below) |
| Survey Cadence | Monthly during active season |
| Budget Authority | Municipal Parks & Recreation + Emergency Management |
Note on palm count: Hawaii lacks a comprehensive palm census—a gap explicitly identified by the Hawaii Forest Action Plan and Smart Trees Pacific. The CRB Response program has surveyed 95,000+ palms across Oahu (including private land, resorts, and military installations), but no species-specific inventory exists for municipal land. Establishing an accurate baseline is a core deliverable of the pilot surveillance program. This data gap strengthens the value proposition: SkyGuard creates the authoritative palm registry that currently does not exist.
Why This Customer First
- Regulatory pressure: State and federal scrutiny on CRB containment
- Political visibility: Palm loss in public parks generates constituent complaints
- Budget capacity: Municipal government with dedicated pest management funding
- Reference value: Success here validates SkyGuard for other Pacific jurisdictions (Guam, CNMI, American Samoa)
- Existing relationships: Tethra Systems is Hawaii-based with local credibility
- Data gap opportunity: No authoritative palm inventory exists—pilot creates foundational asset
What "Built" Means for This Contract
A minimum defensible product requires the full closed-loop spine:
- Ingest survey imagery into immutable, auditable data plane
- Photogrammetry pipeline with QA attestations
- Palm detection and persistent registry
- Damage classification and feature extraction
- Risk scoring with calibration and explainability
- Registry matching across survey cycles
- Treatment mission generation with approvals
- Execution logging and reconciliation
- Ops dashboard for coverage, risk maps, and approvals
Verification and outcome tracking (T+30/60/90 windows) is built in parallel but outcomes naturally lag treatment by 90+ days.
Build Timeline
Milestone A: Treatment-Capable MVP (~20 weeks)
Monitoring → risk queue → missions → execution reconciliation → dashboard
| Sprint | Duration | Deliverable |
|---|---|---|
| 0: Kickoff & Design | 2 weeks | System design, schema, naming conventions |
| 1: Data Plane | 2 weeks | Object storage + PostGIS + event log + ingestion |
| 2: Photogrammetry | 2 weeks | Containerized pipeline, QA metrics, attestation logging |
| 3: Palm Detection | 2 weeks | U-Net segmentation, registry insertion service |
| 4: Damage Classification | 2 weeks | CNN classifier, spectral indices, unified features |
| 5: Risk Scoring | 2 weeks | XGBoost fusion, calibration, SHAP, threshold config |
| 6: Registry Matching | 2 weeks | Cross-cycle matching, merge/split handling |
| 7: Mission Generation | 2 weeks | Clustering, routing, mission schema, approvals |
| 8: API & Dashboard | 2 weeks | FastAPI + React/Mapbox + approval workflows |
| 9: Execution Logging | 2 weeks | Upload API, reconciliation, exception handling |
Result: Process surveys, produce prioritized treatment queue, generate missions, capture execution logs, present in dashboard.
Milestone B: Program-Grade Robustness (+6 weeks)
| Sprint | Duration | Deliverable |
|---|---|---|
| 10: Verification | 2 weeks | T+30/60/90 windows, outcome tracking, evidence packets |
| 11: Model Maintenance | 2 weeks | Drift alerts, calibration dashboards, retraining workflow |
| 12: Governance | 2 weeks | Hash-chained audit log, RBAC finalization, compliance docs |
Important: Engineering completes at ~26 weeks, but outcome data (T+90 windows) matures ~3 months after first treatment cohorts.
Build Cost Estimate
Staffing Plan (20-26 weeks)
| Role | Allocation | Duration | FTE-Months |
|---|---|---|---|
| Tech Lead / Backend Engineer | 1.0 | 6 months | 6 |
| Geospatial / Data Engineer | 0.75–1.0 | 5 months | 4–5 |
| ML Engineer (Computer Vision) | 1.0 | 6 months | 6 |
| Frontend (React/Mapbox) | 1.0 | 4–5 months | 4–5 |
| DevOps / SRE | 0.5 | 6 months | 3 |
| QA / Test Automation | 0.5 | 4 months | 2 |
| Product / Delivery (TPM) | 0.5 | 6 months | 3 |
| Total | | | 28–30 |
Cost Summary
| Category | Low | High | Notes |
|---|---|---|---|
| Core Engineering Labor | $500K | $750K | 28-30 FTE-months @ $18-25K/month |
| Contingency (15-30%) | $75K | $225K | Integration, data issues, compliance iterations |
| Data Labeling | $10K | $40K | Crown annotations + damage classes |
| GPU / Compute (training) | $10K | $60K | Training runs + pipeline testing |
| Tooling / Licenses | $5K | $15K | Pix4D, Mapbox during build |
| Total Build Cost | $600K | $1.0M | Most likely: $700K–$850K |
Note: This is one-time build cost. Monthly platform operations are ~$1,900/month (see Appendix G).
Operational Scaling Path
Platform development scales differently than field operations. As contracts expand beyond the initial pilot:
- Survey operations: Additional drone pilots and equipment for increased coverage area or frequency
- Ground operations: Treatment crews scale with treatment volume, not platform complexity
- Expansion territories: Local operator partnerships (rather than direct staffing) for Guam, CNMI, and COFA deployments
- SOPs and training: Standardized procedures and certification programs enable consistent quality across operators
The platform architecture scales horizontally with infrastructure; field operations scale through partnerships and standardized training.
Validation Milestones
Build-Measure-Learn checkpoints for falsifiable progress:
| Week | Milestone | Hypothesis Tested |
|---|---|---|
| 8 | End-to-end survey processing | Technical risk: Can we reliably process real survey data through detection? |
| 12 | Risk model vs. expert judgment | Value hypothesis: Does the model flag the same palms experts would prioritize? |
| 20 | Mission approval by customer | Customer validation: Will Honolulu approve treatment missions generated by this system? |
| Month 6+ | Outcome data from T+90 windows | Impact hypothesis: Do treated palms show measurably better outcomes than untreated? |
Technical Risk Mitigations
The following risks have been identified with specific mitigations built into the sprint plan:
1. Photogrammetry Variability
Risk: Ortho/CHM quality inconsistency causes downstream ML instability.
Mitigation: QA metrics + accept/reflight thresholds + attestation logging in Sprint 2. Store products as Cloud-Optimized GeoTIFFs (COGs) with radiometric calibration versioning. Create "golden survey" regression artifacts for pipeline change validation.
2. Registry Identity Stability
Risk: Unstable palm IDs across survey cycles makes treatment history and outcomes untrustworthy.
Mitigation: Registry matching produces a match graph (edges with confidence + reasons), not just hard assignment. Low-confidence merges/splits route to human resolution queue without blocking pipeline.
3. Model Defensibility
Risk: Municipal stakeholders challenge "why did we treat this tree."
Mitigation: Isotonic calibration + SHAP explainability in Sprint 5. Log model version + feature vector hash + SHAP summary into audit trail for each queued palm. Versioned thresholds with governance controls.
4. Dashboard Performance
Risk: Mapbox rendering degrades with tens of thousands of palms.
Mitigation: Serve palms as vector tiles or pre-aggregated grid layers. PostGIS indices aligned to common query patterns ("high-risk this week", "treated last 30 days").
5. Governance and Security
Risk: City contracts require audit-grade security controls.
Mitigation: Audit-log primitives in Sprint 1–2, hash-chaining in Sprint 12. Every mutation path idempotent and event-logged from day 1. OAuth/JWT + RBAC + TLS + AES-256 at rest.
Speed-to-Market Levers
- Skip Phase 2 LLM: Not required for core operations; schedule after validation
- Cloud-first delivery: Docker Compose (dev) + Kubernetes (prod) avoids procurement delays
- Parallelize early: Data labeling + dashboard build start Week 1 while data plane is built
- Define success gates early: Use paper's metrics (coverage, latency, registry stability, PPV/recall)
This estimate demonstrates that SkyGuard has been scoped to implementation-ready detail. The beachhead customer provides a concrete validation target for the platform's core value propositions.