SkyGuard · Precision Biosecurity

White Paper v1.1 · December 2025

Tethra Systems LLC · Honolulu, Hawaiʻi


Traditional CRB response treats trees. SkyGuard tracks the infestation—at scale.

SkyGuard is a precision biosecurity platform that transforms how invasive species programs operate. By combining repeatable aerial surveillance, automated detection, and closed-loop outcome verification, SkyGuard enables programs to answer a question that has historically gone unanswered: Is the intervention working—at the population level?


Key Claims

  1. Scalable surveillance without linear labor growth
  2. Persistent identity linking observations and treatments over time
  3. Measurable outcomes with explicit denominators and confidence bounds
  4. Operational defensibility via immutable audit trails and evidence packets
  5. Continuous improvement through calibration sampling and outcome learning

The Opportunity

  • $150M+ total addressable market
  • $100K seed unlocks grants
  • 20-26 weeks to contract-ready

CRB threatens $500M-$1B in economic damage across Hawaii and Pacific territories. USDA is ending direct CRB funding, creating urgency for cost-effective alternatives. A strategic seed investment funds grant capture and business development to unlock non-dilutive federal funding and a beachhead municipal contract.

Implementation-ready: Scoped to sprint-level detail with an identified beachhead customer (City & County of Honolulu), direct access to Mayor's office, and validated build estimates. See Market Opportunity and Appendix I.


This document is confidential and intended for authorized distribution only.

Executive Summary

The Thesis

The Coconut Rhinoceros Beetle (CRB) represents one of the most significant biosecurity threats facing Hawaiʻi's palm populations. Current response programs share a common architecture: inspect individual trees, confirm damage, treat confirmed cases. This tree-centric approach has predictable failure modes at scale—coverage gaps, no persistence, no verification, and no population intelligence.

SkyGuard inverts this model. Instead of treating trees one at a time based on manual inspection, SkyGuard tracks the infestation itself through systematic aerial surveillance, automated detection, and verified outcomes. The result is a program that can scale coverage without scaling headcount, and—critically—can demonstrate whether interventions are working.

What Changes

Traditional Approach | SkyGuard Approach
Complaint-driven sampling | Systematic survey coverage
No persistent palm IDs | Registry with stable identifiers
Single-indicator detection | Multi-modal fusion scoring
Per-palm inspection gates | Threshold-based treatment queuing
Treatment assumed effective | Imaging-verified outcomes
No learning loop | Calibration sampling improves accuracy

Treatment Doctrine

SkyGuard is designed for contexts where the dominant risk is uncontrolled spread. In this framing, under-treatment is the failure mode. The system is calibrated to favor recall over precision—treating more suspected palms is generally preferable to missing infestations that seed new clusters.
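The recall-over-precision doctrine can be made concrete with a small sketch. The palm IDs, scores, and threshold values below are hypothetical, invented only to illustrate how lowering the queue threshold trades extra treatments for fewer missed infestations.

```python
# Illustrative sketch of recall-favoring treatment queuing.
# IDs, scores, and thresholds are hypothetical, not SkyGuard values.

def queue_for_treatment(scored_palms, threshold):
    """Return palm IDs whose risk score meets or exceeds the threshold."""
    return [pid for pid, score in scored_palms if score >= threshold]

scored = [("palm-001", 0.92), ("palm-002", 0.61), ("palm-003", 0.34)]

# A lower threshold favors recall: more suspected palms are queued,
# accepting some false positives to avoid missing infestations that
# would seed new clusters.
print(queue_for_treatment(scored, 0.5))  # palm-001 and palm-002
print(queue_for_treatment(scored, 0.9))  # palm-001 only
```

The operating threshold becomes a single governable knob: raising it spends fewer treatments per survey cycle, lowering it reduces the chance of a missed infestation.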

The Opportunity

The immediate beachhead is municipal CRB response in Hawaiʻi. The City & County of Honolulu represents a single, well-defined customer with active biosecurity coordination needs and an existing relationship with Tethra Systems—including direct access to the Mayor's office.

The broader market is substantial: CRB threatens $500M-$1B in economic damage across Hawaii and Pacific territories over the next decade. The total addressable market for Pacific biosecurity surveillance exceeds $150M annually, spanning Hawaii (4 counties), US territories (Guam, CNMI, American Samoa), and COFA states (Palau, FSM, Marshall Islands).

Funding strategy: A $100K seed investment funds grant writing and business development to capture non-dilutive federal funding (USDA SBIR, DOI OIA, NIFA programs) while securing a city pilot with NRE contribution. Platform build is funded primarily through grants; Tethra retains full IP. Beyond biosecurity, the same platform architecture unlocks broader federal resilience programs (FEMA hazard mitigation, NOAA climate resilience, DoD dual-use) representing additional funding pools exceeding $100M annually. See Market Opportunity for detailed analysis.

A contract-operational system can be built in ~20 weeks for an estimated $600K–$1.0M. See Appendix I for build estimate.

Why Now

SkyGuard is possible because five technology areas have matured simultaneously:

  1. High-resolution aerial imaging with stable radiometric calibration
  2. RTK positioning enabling consistent georegistration across surveys
  3. Production photogrammetry producing reliable survey products
  4. Practical ML pipelines for segmentation and interpretable scoring
  5. Operational drone execution with detailed logging and compliance controls

No single technology is novel. The value is in the system architecture that integrates these components into an operational program with measurable outcomes.

What Success Looks Like

A successful pilot produces:

  • Consistent coverage and processing latency
  • Stable registry performance across survey cycles
  • Measured detection accuracy (PPV/recall) on calibration samples
  • Outcome reporting for treated cohorts with explicit denominators
  • Evidence packets suitable for stakeholder accountability

SkyGuard does not promise eradication. It promises measurable progress with defensible methodology.

Market Opportunity

A $100K seed investment unlocks access to a $150M+ addressable market through strategic grant capture and government contract positioning.

Investment Thesis

The Pacific islands face an escalating biosecurity crisis. The Coconut Rhinoceros Beetle alone threatens $500 million to $1 billion in economic damage over the next decade, yet current spending on detection and response totals only $20-25 million annually—creating a massive gap that drone-based surveillance can help close.

The official Hawaii CRB Battle Plan (May 2025) explicitly calls for "mapping infestations using drones (with thermal imaging) and GIS technology"—a direct market signal. Meanwhile, USDA has announced it will end direct CRB Response funding, creating urgency for cost-effective alternatives.

Funding Strategy: How $100K Unlocks the Market

┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│   SEED ROUND    │────▶│  GRANT CAPTURE  │────▶│   PILOT + NRE   │────▶│ PRODUCTION REV  │
│      $100K      │     │   $500K-$1M+    │     │   $300K-$500K   │     │    $2M+/year    │
│                 │     │                 │     │                 │     │                 │
│ • Grant writing │     │ • USDA SBIR     │     │ • Honolulu DPR  │     │ • State contract│
│ • BD / LOIs     │     │ • DOI OIA       │     │ • MVP build     │     │ • Multi-county  │
│ • Tech proof    │     │ • NIFA programs │     │ • Validation    │     │ • Territory exp │
└─────────────────┘     └─────────────────┘     └─────────────────┘     └─────────────────┘
        │                       │                       │                       │
        ▼                       ▼                       ▼                       ▼
   3-6 months              6-12 months            12-18 months            18-36 months
Phase | Investment | Outcome | Risk Mitigation
Seed | $100K | Grant applications + City LOI + technical proof | Small bet; unlocks non-dilutive capital
Grant Capture | $500K-$1M+ (non-dilutive) | Platform development funding | Federal/state money; retain full IP
Pilot | City NRE contribution | Working system; reference customer | Customer co-investment = real demand signal
Production | Revenue-funded | Recurring contracts; geographic expansion | Proven system; referenceable results

The Crisis Driving Demand

CRB has spread to every major Pacific jurisdiction. The virus-resistant CRB-G biotype defies traditional biocontrol.

Jurisdiction | Status | Detection | Current Response
Oahu, Hawaii | Established | Dec 2013 | Suppression only
Kauai | Active, spreading | May 2023 | Eradication attempted
Big Island | Active, spreading | Oct 2023 | Eradication attempted
Maui County | Limited detection | Sep 2023 | Eradication attempted
Guam | Widespread | Sep 2007 | Suppression only
CNMI (Rota) | Established | Oct 2017 | Containment
Palau | Established | Pre-2016 | Biocontrol showing promise
Marshall Islands | State of emergency | Sep 2023 | FAO assistance active
FSM | CRB-free, high risk | N/A | Prevention/monitoring

Market Sizing: TAM / SAM / SOM

Total Addressable Market: $150M+ annually

Category | Annual Value | Source
Hawaii invasive species need | $50-57M | Legislative Reference Bureau
CRB economic damage potential | $50-100M/year | $500M-$1B over 10 years
Pacific territories (Guam, CNMI, Am. Samoa) | $5-10M | DOI OIA allocations
COFA states environment sector | $10-20M | Compact funding
Adjacent invasives (ROD, LFA, miconia) | $20-30M | Current program spend
TAM Total | $135-217M |

Serviceable Addressable Market: $21-35M annually

Opportunity | Value | Funding Source
CRB detection/monitoring contracts | $5-10M | USDA APHIS, State, DOD
ROD aerial surveillance | $4-6M | Federal + State
HISC/ISC detection grants | $3-4M | State appropriations
DOI OIA biosecurity projects | $2-3M | TAP, CRNR programs
COFA state agriculture support | $3-5M | Compact environment sector
County-level contracts | $2-3M | Honolulu, Maui, Hawaii Co.
Federal SBIR/STTR technology | $2-4M | USDA, potential DoD
SAM Total | $21-35M |

Serviceable Obtainable Market: Years 1-3

Year | Target Revenue | Sources
Year 1 | $500K-$1M | SBIR Phase I ($175K × 1-2), DOI OIA TAP ($100-250K), State pilot ($150-300K)
Year 2 | $1.5-2.5M | SBIR Phase II ($600K), Multi-county contracts ($500K-1M), COFA entry ($200-500K)
Year 3 | $3-5M | State production contract, Territory expansion, Adjacent use cases

FY2026-2027 Funding Windows

Biosecurity-Focused Programs

Program | Deadline | Amount | Fit
Hawaii State FY27 Budget | Jan-May 2026 | Variable | Legislative engagement window
USDA SBIR Phase II | Feb 2026 | $600K | Requires Phase I
USDA PPA 7721 | Jun-Jul 2026 | $50-200K | UH partnership recommended
DOI OIA Technical Assistance | Aug 2026 | $100-500K | Pacific territory partnerships
USDA SBIR Phase I | Sep-Oct 2026 | $175K | Topic 8.2 Plant Protection
NIFA SCRI Pre-application | Nov 2026 | Up to $2M/year | Requires academic partner

State Technology Development

Program | Amount | Fit
HTDC SBIR Matching Grants | $25-150K per award | Matches federal SBIR wins; Phase I ~$25K, Phase II ~$100K
Manufacturing Assistance Program | Up to $100K (20% of costs) | Hardware fabrication scale-up in Hawaii
Hawaii Tech Bridge (ONR) | Variable | Dual-use technology validation

HTDC awarded $1.5M to 14 companies in the most recent cycle. SkyGuard's federal SBIR wins would automatically qualify for state matching funds.

Current Hawaii Funding Context

Hawaii's biosecurity spending has reached historic levels:

  • Department of Agriculture renamed to Department of Agriculture and Biosecurity (DAB)
  • Operating budget increased from $59M to $71M annually (20% increase)
  • $10M in new biosecurity initiatives (Act 231, SLH 2024)
  • $26M for biosecurity staffing
  • HISC funding at highest level since 2003 (~$10M)

Critical transition: USDA ending direct CRB Response funding (~$2M/year + $350K canine support). State HB643 pending to fill gap—creating urgency for cost-effective detection alternatives.

Adjacent Revenue Streams

The same surveillance infrastructure supports multiple aerial-detectable targets:

Target | Detection Method | Market Size | Geographic Scope
Rapid ʻŌhiʻa Death | Canopy dieback (proven) | $4M+/year proposed | All islands, Big Island focus
Miconia | Purple undersides visible | Part of $866M combined | All islands
Albizia | Distinctive canopy | $13M+ storm damage | All islands
Little Fire Ant habitat | Treatment verification | $6.8B over 35 years | Big Island, Maui, Oahu
Feral pig impacts | Thermal + habitat damage | Included in fencing | All islands

For scale: miconia and Little Fire Ant alone account for an estimated $866 million in annual losses.

Geographic Expansion Pathway

Phase | Timeline | Markets | Revenue Model
Phase 1 | Year 1-2 | Oahu pilot → multi-county Hawaii | Direct service delivery
Phase 2 | Year 2-3 | Guam, CNMI (US territories) | Platform + local operators
Phase 3 | Year 3-5 | COFA states (Palau, FSM, RMI) | Processing-as-a-service
Phase 4 | Year 4+ | DoD installations, tourism sector | Enterprise contracts

COFA States: Distinct Funding Pathway

The renewed Compact of Free Association (signed March 2024) provides $7.1 billion over 20 years with Environment as a designated sector grant category:

  • $8.5M via USDA for FAS urban forestry and invasive species
  • $250M FSM, $200M RMI, $50M Palau in mandatory trust fund contributions (FY2024)
  • Access via Joint Economic Management Committees—requires local government partnerships
  • Partner institutions: College of Micronesia-FSM, Palau Community College, College of Marshall Islands

Private Sector Opportunities

While the initial go-to-market is government-focused, significant private sector demand exists for palm asset protection:

  • Resorts and hotels: Waikiki properties depend on palm aesthetics for tourism appeal; CRB damage directly impacts property value and guest experience
  • Golf courses: Hawaii's 80+ courses maintain extensive palm landscapes; several have already lost trees to CRB
  • Agricultural estates: Coconut and ornamental palm operations face direct crop losses
  • Military installations: Pearl Harbor, Schofield Barracks, and other DoD facilities maintain palm landscapes and have dedicated grounds budgets

Private sector contracts offer revenue diversification beyond government budget cycles and could be pursued in parallel once the municipal pilot demonstrates capability. These customers typically have faster procurement cycles and may pay premium rates for priority service.

Accelerator & Non-Dilutive Investment

Program | Amount | Fit
Elemental Excelerator | $200-500K non-dilutive | Climate resilience, environmental monitoring; Honolulu-based with strong local network
Blue Startups | $50-100K + mentorship | Hawaii-focused accelerator; tech commercialization
Hawaii Community Foundation | $25-100K grants | Environmental and conservation grants; community engagement components

Elemental Excelerator (formerly Energy Excelerator) is particularly aligned—they invest in climate, water, agriculture, and "nature" technology solutions. Applications accepted on rolling basis with annual cohort selection. SkyGuard's focus on climate resilience and ecosystem protection makes it a strong candidate.

Beyond Biosecurity: Expansion Funding Pathway

CRB response is the beachhead, but SkyGuard's platform architecture supports broader environmental resilience applications. Once the biosecurity use case is validated, additional federal and state programs become accessible:

Federal Resilience Programs

Program | Agency | Amount | Fit
Hazard Mitigation Grant Program (HMGP) | FEMA | $100K-$2M+ | Post-disaster early warning systems; 75% federal / 25% local match
Homeland Security Grant (UASI) | DHS/FEMA | $5-6M Hawaii allocation | Surveillance tech for emergency response; requires agency partner
Climate Resilience Regional Challenge | NOAA | $5-50M | Coastal hazard monitoring, ecosystem protection; Hawaii received $68.5M in 2024
Transformational Habitat Grants | NOAA Fisheries | $750K-$10M | Ecosystem restoration with climate resilience benefits
Defense Research (ONR/SERDP) | DoD | Variable | Dual-use environmental monitoring for Pacific installations

Expansion Use Cases

The same drone fleet, imaging sensors, and AI analytics that detect CRB damage can be adapted for:

  • Wildfire early detection: Thermal imaging for ignition detection in high-risk areas
  • Coastal erosion monitoring: Repeat surveys for shoreline change analysis
  • Post-disaster damage assessment: Rapid aerial surveys for FEMA/insurance documentation
  • Coral reef health monitoring: Coastal ecosystem surveillance
  • Infrastructure inspection: Utility corridors, bridges, remote facilities

Strategic sequencing: Biosecurity validation (Years 1-2) → Adjacent environmental use cases (Years 2-3) → Broader disaster resilience (Years 3-5). Each stage unlocks larger funding pools while building operational credibility.

Why Now

  1. Policy tailwinds: Hawaii tripled biosecurity funding; department renamed to emphasize biosecurity
  2. Funding transition: USDA ending direct CRB funding creates service vacuum
  3. Technology validation: CRB Battle Plan explicitly calls for drone thermal imaging
  4. Proven detection: ROD drone detection already validated at UH Hilo
  5. Market timing: Agricultural drone market growing 25%+ CAGR
  6. Data gap opportunity: No comprehensive palm inventory exists for Oahu—SkyGuard creates foundational asset
  7. Relationship access: Tethra has direct line to Mayor's office and Chief of Staff

See Appendix I for detailed build estimate and beachhead customer profile.

The Problem

Why CRB is a Hard Biosecurity Problem

The Coconut Rhinoceros Beetle presents a structural challenge that current response architectures are not designed to solve.

The detection delay problem: Observable frond damage—the characteristic V-cuts that trigger human recognition—typically appears well after initial infestation. By the time damage is visible, beetles have already bred and dispersed. Programs that respond to visible damage are always fighting the previous generation's spread.

Note: SkyGuard enables earlier detection than complaint-driven discovery by systematically surveying the full footprint rather than waiting for reports. This is not a claim of pre-symptomatic detection—imaging still requires visible indicators, but systematic coverage finds them sooner than reactive programs.

The coverage problem: Beetles are mobile and opportunistic. Breeding sites include mulch piles, green waste, dead palms, and other decaying organic matter spread across diverse land uses. No feasible inspection program can monitor every potential site continuously.

The verification problem: Traditional programs treat palms and assume effectiveness. Without systematic outcome tracking, there is no way to determine whether treatments are actually reducing infestation pressure or merely consuming resources.

Why "Treating Trees" Fails at Scale

A tree-centric workflow—inspect, confirm, treat—has predictable failure modes:

Coverage Gaps

Inspectors cannot be everywhere. A 30,000–70,000 palm footprint with monthly coverage would require an estimated 6-15 full-time inspectors just to maintain pre-treatment confirmation. (Illustrative: assumes ~150-200 palms/inspector/day, 20 working days/month, including travel and documentation time.) Activity spreads in the uninspected gaps.

No Persistence

Without stable palm identifiers, there is no way to link observations over time. Each survey is an isolated snapshot. Treatment history, outcome trajectories, and longitudinal patterns are invisible.

No Verification

When a palm is treated, the program cannot systematically determine whether the treatment worked. "Success" is asserted rather than measured.

No Population Intelligence

Tree-centric programs cannot answer population-level questions: Where is activity concentrating? Is prevalence increasing or decreasing? What is the program's actual effectiveness rate?

What a Modern Program Requires

Capability | Requirement
Systematic surveillance | Repeatable coverage, not complaint-driven sampling
Persistent identity | Observations and actions accumulate per palm over time
Governable decisions | Threshold-based targeting with explicit tradeoffs
Verified outcomes | Measurable treatment effectiveness with denominators
Operational auditability | Traceable decisions for stakeholder defense

Evidence Base

Independent research validates both the severity of Hawaii's CRB crisis and the technical feasibility of SkyGuard's approach. This section presents third-party evidence with citations.

The Crisis Has Escalated Despite Intensive Response

CRB was first detected on December 23, 2013 at Joint Base Pearl Harbor-Hickam. Despite a decade of response efforts and millions in federal funding, the beetle has spread to four major islands:

Island | First Detection | Current Status (2025)
Oahu | December 2013 | Established; eradication abandoned
Kauai | May 2023 | Active response
Hawaii Island | October 2023 | Spreading from Waikoloa to Kona
Maui | November 2023 | Active response

Sources: Hawaii DAB CRB Information; Governor's Office Press Release

Hawaii's CRB population is the CRB-G (Guam biotype)—resistant to the Oryctes nudivirus that controlled CRB elsewhere in the Pacific. CRB-G "hot spots" cause 50-90% palm damage versus 10-50% for susceptible strains.

Source: Pacific Community: New Biotype Discovery

Official Documents Acknowledge Program Limitations

"Detections on Maui, Kauaʻi, and Hawaiʻi in 2023 indicate our ability to eradicate CRB from the island of Oʻahu is no longer feasible."
— Hawaii Department of Agriculture, December 2024

The CRB Response program's own documentation reveals structural constraints:

  • Limited geographic scope: "Our program is primarily funded to manage populations near ports on Oahu and around high-risk commodities being shipped to other islands."
  • Capacity constraints: "Limited capacity to inspect, treat, and perform outreach on other islands... travel off Oahu is limited."
  • No regulatory authority: The program "does not have the authority to quarantine, make or enforce rules, or perform treatments without landowner authorization."

Source: CRB Hawaii Response Program

HDOA's December 2024 biosecurity report noted only 13 CRB surveys were conducted on Oahu over a 12-month period, with "staffing shortages and current budgetary limitations" forcing focus on "maintaining projects versus active testing."

The CRB Battle Plan Calls for Drone Surveillance

The official May 2025 CRB Eradication Battle Plan prepared for the Hawaii Invasive Species Council explicitly recommends drone-based detection:

"Map infestations using community reports, drones (with thermal imaging), and GIS technology. Focus on target hotspots first for efficient management."
— CRB Battle Plan, Step 1: Surveillance and Mapping

Source: HISC CRB Battle Plan Presentation (PDF)

Drone Detection Technology Is Validated

UH Hilo's Spatial Data Analysis and Visualization Laboratory has validated drone-based tree disease detection in Hawaiian conditions:

Application | Accuracy | Method
Rapid Ohia Death detection | 79-100% | Multispectral imaging (MicaSense sensors)
Miconia detection (AI-assisted) | 83% recall | Deep convolutional neural networks
Forest health monitoring | <6.8% error | Hyperspectral at 7m² threshold

Sources: UH Hilo: Ryan Perroy ROD Research; Remote Sensing (2020)

Dr. Ryan Perroy's team won the $70,000 National Park Service 'Ōhi'a Challenge in 2019 for their drone-based ROD detection approach. The same platform architecture applies directly to CRB damage detection.

State Biosecurity Funding Has Tripled

Hawaii's biosecurity investment reached unprecedented levels in 2024-2025:

Legislation | Amount | Key Provisions
Act 231 (2024) | $10M | Targeted biosecurity initiatives; $1.2M for CRB (drones, chemicals, nets)
Act 236 (2025) | $8.5M | Department renamed to Agriculture & Biosecurity; HISC funding
Act 250 (2025) | $26.6M | 44 new permanent biosecurity positions

Total state biosecurity funding increased from $6.2M to $19.2M annually—more than tripling since FY2023.

Sources: DAB Biosecurity Legislation; Spectrum News

Economic Damage Projections Are Severe

Threat | Projected Impact | Timeframe
CRB | $500M – $1B | Next decade
Little Fire Ant | $1.2B treatment costs | Cumulative
Rapid Ohia Death | 1M+ trees killed | To date (270,000 acres)
Miconia + LFA combined | $866M annually | Ongoing

Sources: HISC CRB Battle Plan; Earth.Org; ScienceDirect: LFA Economic Analysis

Current Detection Is Reactive, Not Systematic

Every new island detection occurred after beetles had already established breeding populations:

  • Kauai: First detection at green waste transfer station where beetles were already present
  • Hawaii Island: Discovered when a resident reported grubs in a decaying palm stump
  • Maui: Detected in nursery products after interisland spread

HDOA acknowledged interim rules were "launched too late to contain the spread of CRB on the island of Oahu." The 20-month delay on rule changes allowed continued interisland spread.

Pheromone traps—the primary detection tool—have documented limitations. HDOA states traps are "used for early detection of CRB infestations and are not effective in reducing populations." Dr. Aubrey Moore (University of Guam) noted at the 2025 science symposium that "pheromone trap catch may not be correlated with population density."

The Gap SkyGuard Fills

Current Limitation | SkyGuard Capability
13 surveys/year on Oahu | Systematic coverage on defined cadence
Port-focused, limited neighbor island capacity | Multi-island platform deployment
Reactive detection after establishment | Proactive damage pattern identification
No outcome verification | Persistent tracking with measured effectiveness
No palm inventory exists | Registry with stable identifiers
Trap catch ≠ population data | Population-level intelligence

Adjacent Threats Multiply Platform Value

The same aerial surveillance infrastructure addresses multiple biosecurity threats:

Rapid Ohia Death (ROD)

Has killed over 1 million ʻōhiʻa trees across 270,000+ acres on Hawaii Island, with confirmed infections on Kauai and Oahu. Aerial detection is already validated—Carnegie Airborne Observatory research demonstrated that 80% of ROD-infected plants show detectable decreases in foliar chlorophyll and water content.

Little Fire Ant (LFA)

Over 4,000 infested locations on Hawaii Island. Economic analysis projects $1.2 billion in treatment costs. While direct aerial detection of ants is challenging, drones enable treatment delivery and habitat surveillance.

Miconia

UH Hilo research achieved 83% recall using AI-assisted drone imagery—outperforming human analysts (50-74% detection). The distinctive bicolored leaves provide clear aerial signatures.


Bottom line: Independent sources confirm that current CRB response is failing at scale, official plans call for drone-based surveillance, the technology is validated, and funding is available. SkyGuard addresses every documented gap.

The Solution

What SkyGuard Is

SkyGuard is a precision biosecurity operations platform. It combines repeatable aerial imaging with automated analysis to produce treatment queues and verified outcomes.

The system does not track individual beetles. It tracks infestation indicators at the palm level and produces population-level intelligence that is operationally useful and programmatically defensible.

Core Operating Loop

┌─────────┐    ┌─────────┐    ┌─────────┐    ┌─────────┐    ┌─────────┐
│  SENSE  │───▶│ ANALYZE │───▶│  TREAT  │───▶│ VERIFY  │───▶│  LEARN  │
└────┬────┘    └─────────┘    └─────────┘    └─────────┘    └────┬────┘
     ▲                                                           │
     │                  (outcomes improve models)                │
     └───────────────────────────────────────────────────────────┘

Sense

Repeatable aerial surveys capture high-resolution RGB imagery (multispectral optional) on a defined cadence. RTK positioning ensures consistent georegistration across survey cycles.

Analyze

Survey imagery flows through an automated pipeline: Palm crown segmentation creates or updates registry entries. Multi-modal detection extracts damage and stress indicators. Fusion model produces calibrated risk scores per palm. Risk scores are explainable via feature attribution.
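One minimal way to realize a fusion model of this kind is a weighted logistic combination of per-palm indicators, with per-feature logit contributions standing in for the feature-attribution step. The feature names, weights, and bias below are invented for illustration; they are not SkyGuard's actual model.

```python
import math

# Hypothetical sketch of multi-modal fusion scoring: per-palm damage and
# stress indicators (each normalized to [0, 1]) are combined into a risk
# score via a weighted logistic model. Weights/bias are illustrative only.

WEIGHTS = {"v_cut_damage": 2.1, "canopy_stress": 1.4, "frond_loss": 0.9}
BIAS = -2.5

def risk_score(features):
    """Calibrated-style risk score in [0, 1] from indicator features."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def attribution(features):
    """Per-feature contribution to the logit (simple additive attribution)."""
    return {k: WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS}

palm = {"v_cut_damage": 0.8, "canopy_stress": 0.5, "frond_loss": 0.2}
print(round(risk_score(palm), 3))
print(attribution(palm))  # which indicators drove the score
```

A linear-in-logit form keeps scores explainable: each indicator's contribution can be reported alongside the score, which is the property the white paper relies on for per-palm defensibility.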

Treat

Treatment targeting is threshold-based: Palms above the configured risk threshold enter a treatment queue. Nearby palms are clustered into treatment zones. Mission packages are generated with compliance constraints. Human operators approve batches and licensed applicators execute.
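The clustering of queued palms into treatment zones can be sketched by snapping coordinates to grid cells. The cell size and coordinate frame are assumptions; a production system might instead use a density-based spatial clustering method such as DBSCAN.

```python
from collections import defaultdict

# Hypothetical sketch: group above-threshold palms into treatment zones
# by snapping local projected coordinates (meters) to a grid cell.
CELL_M = 250.0  # zone cell size in meters (illustrative assumption)

def zone_key(x, y):
    """Grid cell containing the point (x, y)."""
    return (int(x // CELL_M), int(y // CELL_M))

def build_zones(queued):
    """queued: list of (palm_id, x_m, y_m) for palms in the treatment queue."""
    zones = defaultdict(list)
    for pid, x, y in queued:
        zones[zone_key(x, y)].append(pid)
    return dict(zones)

queued = [("p1", 10, 20), ("p2", 40, 90), ("p3", 600, 20)]
print(build_zones(queued))  # p1 and p2 share a zone; p3 is in its own
```

Each resulting zone would then become one mission package, so applicators work contiguous areas rather than scattered individual palms.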

Verify

Follow-up imaging enables outcome measurement: Treated palms are re-observed in T+30, T+60, and T+90 windows (±5-10 days based on survey cadence). Composite and risk-based outcome scores classify response. Confounders (drought, storm, other pests) are flagged and excluded. Outcome distributions are reported with explicit denominators.
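The window-matching step can be sketched as follows; the ±10-day tolerance is an assumption within the document's stated ±5-10 day range, and in practice the tolerance would be set per survey cadence.

```python
from datetime import date

# Hypothetical sketch of follow-up window matching: assign each
# re-observation of a treated palm to the T+30/T+60/T+90 verification
# window it falls inside. Tolerance (±10 days) is an assumption.
WINDOWS = {30: 10, 60: 10, 90: 10}  # target day -> tolerance in days

def match_window(treated_on, observed_on):
    """Return the T+N window for this observation, or None if outside all."""
    delta = (observed_on - treated_on).days
    for target, tol in WINDOWS.items():
        if abs(delta - target) <= tol:
            return target
    return None

t = date(2026, 1, 1)
print(match_window(t, date(2026, 1, 28)))  # day 27 -> T+30 window
print(match_window(t, date(2026, 2, 15)))  # day 45 -> no window
```

Observations that fall outside every window would not count toward outcome denominators, which keeps the reported distributions honest about what was actually re-observed on schedule.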

Learn

Calibration sampling and verified outcomes close the loop: Independent inspections measure detection accuracy (PPV/recall). Outcome data informs model recalibration. Threshold governance responds to measured performance.
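The PPV/recall computation from calibration inspections is standard and can be shown directly; the sample data below is invented for illustration.

```python
# Sketch: compute PPV (precision) and recall from calibration inspections,
# where each record pairs the system's flag with field-verified ground truth.
# Sample counts below are hypothetical.

def ppv_recall(samples):
    """samples: list of (flagged: bool, infested: bool) pairs."""
    tp = sum(1 for f, i in samples if f and i)       # flagged and infested
    fp = sum(1 for f, i in samples if f and not i)   # flagged, not infested
    fn = sum(1 for f, i in samples if not f and i)   # missed infestation
    ppv = tp / (tp + fp) if tp + fp else None
    recall = tp / (tp + fn) if tp + fn else None
    return ppv, recall

samples = ([(True, True)] * 8 + [(True, False)] * 2
           + [(False, True)] * 1 + [(False, False)] * 9)
print(ppv_recall(samples))  # PPV 0.8, recall 8/9
```

Because the calibration sample includes unflagged palms, the recall denominator is known, which is what lets the program report bounded accuracy rather than asserting it.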

Human-in-the-Loop Governance

SkyGuard is explicitly not autonomous. Humans retain authority at key control points:

Control Point | Role | Authority
Threshold configuration | Program Manager | Sets precision/recall operating point
Batch approval | Program Manager | Reviews and releases treatment missions
Field execution | Licensed Applicator | Safety abort; obstacle discretion
Model updates | Data Lead | Approves retraining and recalibration
Stop conditions | Multiple | Mandatory halts for performance breaches

AI Query Interface (Phase 2)

SkyGuard's architecture supports a natural language query layer powered by a local LLM, enabling operators to interact with program data conversationally and generate narrative summaries on demand. This capability is planned for Phase 2 deployment after core pipeline validation.

How It Works

User Query → Local LLM → Database/API Calls → Data Retrieval → Narrative Summary

The LLM is grounded in actual data—it queries the live registry, risk scores, and treatment history, then formats results into functional narrative summaries. No external API calls; all processing stays on-prem.

Example Queries

User Asks | System Returns
"What are the highest-risk zones this week?" | Ranked zone list with counts and avg scores
"Summarize December program status" | Coverage, treatment, outcome metrics in prose
"Why did palm abc-123 score so high?" | SHAP factors formatted as narrative
"Compare Zone A vs Zone B outcomes" | Side-by-side outcome distributions
"Treatment history for Kailua sector" | Timeline summary with key stats
"Draft a Q4 progress update" | Aggregated metrics formatted for stakeholders

Design Principles

  • Grounded responses: All answers derived from database queries, not model training
  • Functional narrative: Clear summaries optimized for operational use, not prose quality
  • On-prem processing: Local LLM keeps data in-house, no external API dependencies
  • Domain-tuned: Fine-tuned on CRB/biosecurity vocabulary and report formats
  • Query restrictions: Text-to-SQL limited to SELECT operations on authorized views; no data modification permitted via natural language interface
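The query-restriction principle above can be sketched as a guard that rejects any generated SQL that is not a single read-only SELECT against an allow-listed view. The view names and validation rules are assumptions for illustration; a real deployment would also rely on database-level permissions rather than string checks alone.

```python
import re

# Hypothetical sketch of a SELECT-only guard for LLM-generated SQL.
# View names and rules are illustrative assumptions.
ALLOWED_VIEWS = {"palm_registry_v", "risk_scores_v", "treatment_history_v"}
FORBIDDEN = re.compile(r"\b(insert|update|delete|drop|alter|create|grant)\b",
                       re.IGNORECASE)

def is_safe_select(sql):
    """True only for a single SELECT touching allow-listed views."""
    stmt = sql.strip().rstrip(";")
    if ";" in stmt or FORBIDDEN.search(stmt):
        return False  # multiple statements, or a write/DDL keyword
    if not stmt.lower().startswith("select"):
        return False
    tables = re.findall(r"\bfrom\s+(\w+)", stmt, re.IGNORECASE)
    return bool(tables) and all(t in ALLOWED_VIEWS for t in tables)

print(is_safe_select("SELECT zone, AVG(score) FROM risk_scores_v GROUP BY zone"))
print(is_safe_select("DELETE FROM palm_registry_v"))
```

Layering a guard like this on top of read-only database credentials follows defense-in-depth: even if the string check is bypassed, the connection itself cannot modify data.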

Note: Core detection, treatment, and outcome tracking functionality operates independently of the AI query interface. The LLM layer enhances usability but is not on the critical path for program operations.

What SkyGuard Is Not

SkyGuard Is | SkyGuard Is Not
Systematic surveillance at scale | Dependent on per-palm manual inspection
Persistent registry with longitudinal tracking | One-time damage survey
Population-level intelligence | Individual beetle tracker
Threshold-based targeting with governance | Case-by-case human authorization
Verified outcomes with denominators | Assumed treatment effectiveness
Bounded claims with confidence intervals | "X% accuracy" without methodology

Business Model & Moat

Why This Is a Business

A detection model is not a product. The market value exists where:

  • Decisions translate into field operations
  • Outcomes are verified and reported
  • Performance is measured and defended
  • The system improves with operational data

SkyGuard's product is the operating system for biosecurity response—not a point solution, but an integrated platform that captures value across the entire operational loop.

Revenue Model

Platform Subscription (Annual)

Recurring revenue for core platform access: Palm registry and risk scoring engine, program dashboards and reporting, audit trail and evidence packet generation, model monitoring and calibration tooling, governance tooling.

Operations Services (Variable)

Usage-based revenue scaling with program footprint: Survey operations (per survey or per acre), processing and analysis (per survey cycle), mission packaging and reconciliation (per batch), verification reporting.

Professional Services (As-Needed)

High-value services for program maturity: Calibration sampling coordination, compliance documentation, model customization, training.

Competitive Moat

1. Data Moat: Longitudinal Registry + Outcomes

SkyGuard creates a unique dataset that cannot be quickly replicated: persistent palm IDs across survey cycles, linked treatments and verified outcomes, calibration labels with known denominators, temporal patterns revealing population dynamics.

2. Defensibility Moat: Auditability + Evidence Packets

Municipal programs face stakeholder scrutiny. SkyGuard's value increases when accountability is demanded: every decision is traceable to inputs and rules, immutable logs prevent post-hoc modification, evidence packets support per-palm defense.

3. Governance Moat: Decision Contract

SkyGuard's threshold-based decision contract makes the program: Tunable (operators adjust the precision/recall tradeoff), Consistent (same inputs produce same outputs), Reviewable (decision rules are explicit and auditable), Transferable (staff turnover doesn't break institutional knowledge).

4. Execution Moat: Operations Playbook

Scaled biosecurity programs are won or lost on execution: repeatable survey SOPs with QA gates, mission packaging with compliance constraints, exception handling with documented escalation, verification protocols with outcome classification.

5. Relationship Moat: Early Adopter Access

The existing relationship with City & County of Honolulu provides: reduced time-to-pilot, real operational context for development, credibility with adjacent stakeholders, reference customer for expansion.

Unit Economics (Illustrative)

At pilot scale (~5,000 acres, estimated 30,000–70,000 palms, monthly surveys):

Cost Component | Traditional | SkyGuard
--- | --- | ---
Inspection labor | 6-20+ FTE | ~0.5 FTE (calibration only)
Survey operations | N/A | 2-3 FTE
Treatment operations | 4-6 FTE | 2-3 FTE
Program management | 2-3 FTE | 1-2 FTE
Verification | None | Included in survey
Total | 12-30+ FTE | 6-9 FTE

Note: Traditional FTE range reflects uncertainty in palm count. SkyGuard economics are relatively stable because survey costs scale with acreage, not palm count.

Roadmap & Risks

Validation Pathway

SkyGuard follows a pilot-gated roadmap. Each phase produces specific evidence artifacts before advancing.

Phase 0: Bench Validation (4-8 weeks)

Goal: Demonstrate pipeline reproducibility on representative Hawaiʻi imagery.

Milestone | Evidence
--- | ---
Survey SOP established | Documented altitude/speed/overlap windows
Pipeline reproducibility | Same outputs for same inputs (within tolerance)
Registry schema validated | Stable IDs, observation linking, spatial indexing
Baseline model performance | Published metrics on available labeled data

Phase 1: Pilot Launch (Month 1-3)

Goal: Establish operational tempo and baseline program metrics.

Phase 2: Verification Emergence (Month 4-6)

Goal: Begin outcome measurement with first T+90 cohorts.

Phase 3: Scale Proof (Month 7-12)

Goal: Use outcome data to recalibrate and prove scalability.

Key Risks and Mitigations

Risk | Impact | Mitigation
--- | --- | ---
Model drift / domain shift | Detection accuracy degrades | Continuous calibration monitoring; threshold governance; retraining plan
Registry ID instability | Longitudinal tracking breaks | Multi-signal matching; stability KPIs; manual correction workflow
Coverage gaps | Verification denominators incomplete | Gap detection; fill planning; missingness tracking
Applicator compliance | Regulatory exposure | Credential checks; mission constraints; immutable logs
Geographic expansion | Models trained on Hawaiʻi may underperform in new regions (Guam, Palau, etc.) due to different palm varieties, lighting, vegetation | Calibration datasets from new locales before deployment; transfer learning approaches; local expert labeling partnerships
Operational scaling | Field operations (pilots, ground crews) may not scale as fast as contracts | Standardized SOPs; training programs; local operator partnerships in expansion territories

Claims SkyGuard Will Not Make

We Will Say | We Will Not Say
--- | ---
"PPV of X% (95% CI: Y-Z%) on calibration sample" | "X% accuracy" without methodology
"Outcome improvement in N% of treated cohort" | "Eradicated CRB in treated area"
"Human-supervised automated targeting" | "Autonomous treatment"

Success Criteria

Dimension | Criterion
--- | ---
Coverage | ≥95% of planned footprint per cycle
Processing | <24 hours latency (P95)
Registry | ≥95% ID stability (3+ surveys)
Detection | PPV >50%, Recall >85% on calibration sample
Verification | <15% missingness
Auditability | Evidence packets complete for all treated palms

Appendix A: Canonical Definitions

These definitions are authoritative throughout the SkyGuard system. Terms are defined once here and not redefined elsewhere.

Core Entities

Palm

A single palm tree instance in the registry with: palm_id (stable UUID), geometry (crown polygon), first_observed, status (Active | Removed | Merged | Split).

Survey

A complete imaging collection over a defined footprint with: survey_id, footprint, captured_at, sensor_config, qa_status.

Risk and Decision

Risk Score

A calibrated likelihood score in [0.0, 1.0] representing the probability of active CRB damage requiring treatment. Generated by the fusion model from imaging-derived features. Calibrated on the monthly Stream A Audit sample; calibration is monitored for drift and recalibrated as needed.

Treatment Threshold

The risk score cutoff above which palms enter the treatment queue. Operator-configurable. Default: 0.6.

Holdout Period

Minimum time after treatment before a palm can be re-queued. Default: 90 days.
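The threshold and holdout definitions above combine into a single queue-eligibility rule. The following is an illustrative sketch only; the function and field names are assumptions, not the production decision contract.

```python
from datetime import date

# Hypothetical eligibility check: a palm enters the treatment queue only if
# its calibrated risk score meets the threshold AND it is outside the
# post-treatment holdout window. Defaults mirror the values defined above.
def queue_eligible(risk_score, last_treatment_date, today,
                   threshold=0.6, holdout_days=90):
    if risk_score < threshold:
        return False
    if last_treatment_date is not None:
        if (today - last_treatment_date).days < holdout_days:
            return False  # still inside holdout; cannot be re-queued yet
    return True

# Treated 30 days ago: held out even with a high score.
print(queue_eligible(0.82, date(2025, 11, 1), date(2025, 12, 1)))  # False
print(queue_eligible(0.82, None, date(2025, 12, 1)))               # True
```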

Detection Metrics

Positive Predictive Value (PPV)

The proportion of palms flagged as above-threshold that are confirmed positive by Level 4A ground-truth (RCI or physical inspection). Computed on the monthly Stream A Audit sample.

PPV = True Positives / (True Positives + False Positives)

Recall (Sensitivity)

The proportion of truly infested palms (confirmed by Level 4A ground-truth) that the model correctly flagged as above-threshold. Computed on the monthly Stream A Audit sample.

Recall = True Positives / (True Positives + False Negatives)
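Both metrics reduce to simple confusion-count ratios. A minimal sketch, with illustrative counts (not real program data):

```python
# PPV and recall exactly as defined above, computed from confusion counts
# on the monthly Stream A Audit sample.
def ppv(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

# Hypothetical example: 100 flagged palms, 62 confirmed by Level 4A
# ground-truth; 8 confirmed infestations were missed by the model.
print(round(ppv(62, 38), 2))    # 0.62
print(round(recall(62, 8), 3))  # 0.886
```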

Ground-Truth Definition

A palm is "truly infested" if Level 4A verification (RCI or physical inspection with evidence package) confirms active CRB damage. Stream A Audit sample provides the denominator for both metrics.

Evidence Levels

Level | Name | Evidence | Use
--- | --- | --- | ---
0 | Registered | Stable ID in registry | Baseline existence
1 | Flagged | Single detection signal | Early warning
2 | Above-Threshold | Fusion score ≥ threshold | Treatment queue eligibility
3 | Trap-Correlated | Within active trap capture zone | Spatial validation
4A | Close Inspection Verified | RCI or physical inspection with evidence package | PPV/recall denominators
4B | Screening Observed | Binocular/roadside only | Sentinel trending (excluded from denominators)
5 | Lab Verified | Genetic/taxonomic confirmation | Scientific corroboration

Outcome Categories

Category | Criteria | Interpretation
--- | --- | ---
Improved | Δcomposite > +0.15 AND no new V-cuts | Treatment effective
Stable | Δcomposite -0.10 to +0.15 | Inconclusive
Declined | Δcomposite < -0.10 OR new V-cuts | Possible failure
Confounded | Non-CRB stressor identified | Excluded from effectiveness
Lost | Palm not re-imaged | Missing data

Delta Conventions

Δrisk = risk_T+90 - risk_pre (Negative = improvement)

Δcomposite = weighted combination (Positive = improvement)

Interpretation: Lower risk scores are better (reduced infestation likelihood). Higher composite scores are better (improved overall health). The sign conventions are opposite because risk measures "badness" while composite measures "goodness."
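The outcome categories and sign conventions above can be expressed as a small decision function. This is an illustrative sketch; the evaluation order and argument names are assumptions, not the production rule engine.

```python
# Outcome classification per the Outcome Categories table: Lost and
# Confounded are checked first, then the delta-composite thresholds.
def classify_outcome(delta_composite, new_v_cuts, confounded, reimaged):
    if not reimaged:
        return "Lost"            # palm not re-imaged: missing data
    if confounded:
        return "Confounded"      # non-CRB stressor: excluded from effectiveness
    if delta_composite > 0.15 and not new_v_cuts:
        return "Improved"
    if delta_composite < -0.10 or new_v_cuts:
        return "Declined"
    return "Stable"              # -0.10 to +0.15: inconclusive

print(classify_outcome(0.20, False, False, True))  # Improved
print(classify_outcome(0.05, True, False, True))   # Declined (new V-cuts)
print(classify_outcome(0.00, False, False, True))  # Stable
```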

Appendix B: Technical Architecture

This appendix describes the system architecture at a level suitable for engineering estimation and implementation planning.

Design Principles

  1. Geospatial-first: Store and index data in PostGIS; design pipelines around spatial features
  2. Immutable, versioned artifacts: Each run (survey, model, risk scoring) must be reproducible and traceable
  3. Closed-loop governance: Program managers set thresholds; licensed applicators execute; all actions are auditable
  4. Evidence and metrics: Report denominators and missingness; avoid over-claiming
  5. Separation of concerns: Keep sensing, processing, scoring, mission packaging, execution and verification modular

System Topology

┌────────────────────────────────────────────────────────────────────────────┐
│                                 DATA PLANE                                 │
├────────────────────────┬────────────────────────┬──────────────────────────┤
│  Object Storage        │  Spatial DB            │  Event Log Store         │
│  (S3 / MinIO)          │  (PostGIS)             │  (Kafka / Append-only)   │
│                        │                        │                          │
│  - Raw frames          │  - Palm registry       │  - Ingestion events      │
│  - Orthomosaics        │  - Observations        │  - QA attestations       │
│  - CHM products        │  - Risk scores         │  - Model version used    │
│  - MS reflectance      │  - Treatment queue     │  - Threshold snapshots   │
│  - Model artifacts     │  - Missions            │  - Batch approvals       │
│  - Evidence crops      │  - Exec logs           │  - Execution traces      │
│                        │  - Constraints         │  - Verification outcomes │
└────────────────────────┴────────────────────────┴──────────────────────────┘
                                     │
┌────────────────────────────────────┴───────────────────────────────────────┐
│                               COMPUTE PLANE                                │
├────────────────────────┬────────────────────────┬──────────────────────────┤
│  Photogrammetry        │  ML Inference          │  Job Orchestrator        │
│  (Pix4D / ODM)         │  (GPU Workers)         │  (Prefect / Dagster)     │
│                        │                        │                          │
│  - Ortho generation    │  - Segmentation        │  - Survey DAGs           │
│  - CHM extraction      │  - V-cut CNN           │  - Batch scheduling      │
│  - Georegistration     │  - XGBoost fusion      │  - Follow-up triggers    │
│  - QA metrics          │  - SHAP explain        │  - Retraining jobs       │
└────────────────────────┴────────────────────────┴──────────────────────────┘
                                     │
┌────────────────────────────────────┴───────────────────────────────────────┐
│                               SERVING PLANE                                │
├────────────────────────┬────────────────────────┬──────────────────────────┤
│  REST/GraphQL API      │  Ops Dashboard         │  Model Registry          │
│  (FastAPI)             │  (React + Mapbox)      │  (MLflow)                │
│                        │                        │                          │
│  - Palm queries        │  - Coverage maps       │  - Version control       │
│  - Risk scores         │  - Risk heatmaps       │  - Calibration curves    │
│  - Mission packages    │  - Batch approval      │  - Performance deltas    │
│  - Evidence packets    │  - Exec stats          │  - Canary deployments    │
│  - Audit exports       │  - Drift alerts        │  - A/B testing           │
└────────────────────────┴────────────────────────┴──────────────────────────┘

Data Model (Core Tables)

-- Palm Registry
palm (palm_id UUID PK, geometry POLYGON, centroid POINT,
      first_observed TIMESTAMP, last_observed TIMESTAMP,
      status VARCHAR, cultivar VARCHAR, height_m FLOAT)

-- Survey Metadata  
survey (survey_id UUID PK, footprint POLYGON, captured_at TIMESTAMP,
        sensor_config JSONB, processing_ver VARCHAR, qa_status JSONB)

-- Per-Palm Observations
observation (obs_id UUID PK, palm_id FK, survey_id FK,
             features JSONB, risk_score FLOAT, shap_factors JSONB)

-- Treatment Operations
treatment_queue (queue_id UUID PK, threshold FLOAT, model_version VARCHAR)
mission (mission_id UUID PK, batch_id UUID, targets JSONB, route JSONB,
         constraints JSONB, approved_by VARCHAR, approved_at TIMESTAMP)
execution_log (log_id UUID PK, mission_id FK, flight_track LINESTRING,
               events JSONB, weather_snap JSONB, exceptions JSONB)

-- Verification & Audit
verification (palm_id FK, treatment_id UUID, t30/t60/t90_score FLOAT,
              outcome_class VARCHAR, composite_delta FLOAT)
audit_event (event_id UUID PK, timestamp, actor, action, object_id,
             payload JSONB, prev_hash VARCHAR, event_hash VARCHAR)

Pipeline Stages (Detailed)

Stage 1: Ingest

  • Upload raw frames to object storage with lifecycle rules
  • Extract EXIF/XMP metadata (GPS, camera settings)
  • Compute per-frame quality metrics (blur, exposure, coverage)
  • Build spatial footprint index
  • Record ingestion event with hash in audit log

Stage 2: Photogrammetry

  • Invoke containerized engine (Pix4D/ODM)
  • Generate orthomosaic with RTK georegistration
  • Extract CHM from structure-from-motion
  • Compute reflectance products (if MS enabled)
  • QA gates: coverage ≥95%, blur score, registration RMSE
  • Attestation report with pass/fail stored in event log

Stage 3: Palm Detection

  • Tile orthomosaic into 512×512 chips
  • Run U-Net segmentation (EfficientNet-B4 encoder)
  • Post-process: threshold, connected components, polygon simplification
  • Filter by area (2-100 m²) and circularity (>0.4)
  • Merge overlapping polygons across tile boundaries
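The area and circularity filter in the post-processing steps above can be sketched as follows. Here circularity is assumed to be the common isoperimetric ratio 4πA/P² (1.0 for a perfect circle); the exact production definition may differ.

```python
import math

# Post-processing filter sketch: reject detected polygons that are too
# small, too large, or too irregular to be a palm crown.
def keep_polygon(area_m2, perimeter_m, min_area=2.0, max_area=100.0,
                 min_circularity=0.4):
    if not (min_area <= area_m2 <= max_area):
        return False
    # Isoperimetric circularity: 1.0 for a circle, -> 0 for thin slivers.
    circularity = 4 * math.pi * area_m2 / perimeter_m ** 2
    return circularity >= min_circularity

# A 3 m-radius circular crown passes; a long thin sliver of similar area fails.
r = 3.0
print(keep_polygon(math.pi * r ** 2, 2 * math.pi * r))  # True
print(keep_polygon(28.3, 120.0))                        # False
```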

Stage 4: Damage Classification

  • Extract 256×256 crown crops per detected palm
  • Run V-cut CNN classifier (ResNet-50 backbone)
  • Escalation: if confidence 0.4-0.8, retrieve raw frames for re-eval
  • Compute spectral indices (NDVI, NDRE, GNDVI, SAVI)
  • Aggregate per-crown statistics (mean, std, percentiles)

Stage 5: Risk Scoring

  • Assemble feature vector (~50 features): damage, spectral, temporal, spatial, structural
  • Run XGBoost inference
  • Apply isotonic calibration for probability alignment
  • Compute SHAP values for top-5 feature attribution
  • Store scores and explanations in observation table
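The isotonic calibration step above maps raw fusion scores to empirical positive rates while enforcing monotonicity. Production would use a library implementation (e.g. scikit-learn); this minimal pool-adjacent-violators (PAV) sketch only illustrates the idea.

```python
# Pool-adjacent-violators: fit a nondecreasing step function to
# (raw_score, binary_label) pairs. Returns (score, calibrated_probability).
def isotonic_fit(pairs):
    pairs = sorted(pairs)                 # ascending by raw score
    blocks = []                           # [block_mean, block_weight]
    for _, label in pairs:
        blocks.append([float(label), 1])
        # Merge adjacent blocks while the monotonicity constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, w2 = blocks.pop()
            v1, w1 = blocks.pop()
            blocks.append([(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2])
    out, i = [], 0
    for mean, weight in blocks:           # expand blocks back to points
        for _ in range(weight):
            out.append((pairs[i][0], mean))
            i += 1
    return out

data = [(0.2, 0), (0.3, 1), (0.4, 0), (0.7, 1), (0.9, 1)]
for score, p in isotonic_fit(data):
    print(score, round(p, 2))
```

The violating pair (score 0.3 labeled 1, score 0.4 labeled 0) is pooled to 0.5, so the calibrated output is nondecreasing in the raw score.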

Stage 6: Registry Matching

  • Spatial proximity matching (configurable threshold: 2-5m)
  • Shape signature comparison (Fourier descriptors)
  • Neighborhood graph consistency check
  • Height consistency validation from CHM
  • Handle merge/split cases with confidence scoring
  • Low-confidence matches flagged for manual review

Stage 7: Mission Generation

  • Filter: score ≥ threshold AND outside holdout period
  • Cluster nearby targets (DBSCAN or grid-based)
  • Route optimization (OR-Tools / OSRM)
  • Attach constraint snapshot (geofences, weather limits, credentials)
  • Generate mission JSON with schema version
  • Hold for program manager batch approval
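The grid-based clustering variant mentioned in the steps above is the simpler alternative to DBSCAN: targets in the same grid cell become one mission batch. Cell size and planar coordinate units here are assumptions for illustration.

```python
from collections import defaultdict

# Grid-based clustering sketch: bucket above-threshold palm positions
# (x, y in meters) by grid cell so nearby targets fly as one mission.
def grid_cluster(points, cell_size_m=250.0):
    clusters = defaultdict(list)
    for x, y in points:
        key = (int(x // cell_size_m), int(y // cell_size_m))
        clusters[key].append((x, y))
    return list(clusters.values())

targets = [(10, 20), (30, 40), (600, 610), (620, 590)]
print(len(grid_cluster(targets)))  # 2
```

DBSCAN avoids the hard cell-boundary splits this approach can produce, at the cost of a density-parameter tuning step.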

Stage 8: Execution & Reconciliation

  • Applicator uploads flight track + application events via API/CLI
  • Spatial join: planned targets vs. executed positions
  • Compute success rate, flag deviations
  • Exception codes: ACCESS_DENIED, OBSTRUCTION, SAFETY_ABORT, WEATHER_ABORT
  • Push missed targets back to queue or schedule manual inspection

Stage 9: Verification

  • Schedule T+30/60/90 follow-up surveys automatically
  • Run change detection: pre vs. post imagery
  • Compute risk delta and composite delta
  • Apply outcome classification rules
  • Flag confounders (drought, storm, pruning, other pests)
  • Generate evidence packets per treated palm

Stage 10: Model Maintenance

  • Stratified sampling for ground-truth acquisition
  • Compute PPV, recall, calibration curves on sample
  • Monitor feature and score distribution drift
  • Automated retraining when drift exceeds threshold
  • Canary deployment: test new model on subset before rollout
  • Threshold adjustment guidance based on PPV/recall tradeoffs

Infrastructure Decision Trees

Cloud vs. On-Prem

If... | Then...
--- | ---
Data sovereignty, offline operation, or predictable costs are critical | On-prem: MinIO + self-hosted PostGIS
Agility, scalability, less maintenance preferred | Cloud: AWS S3, RDS, managed Kafka

RGB vs. Multispectral

If... | Then...
--- | ---
Budget allows and early detection sensitivity is paramount | Procure MS cameras from day one
Budget constrained or proving concept first | Start RGB, gather baseline, phase in MS later

Photogrammetry Software

If... | Then...
--- | ---
Budget and support available | Commercial: Pix4D or Agisoft Metashape
Cost-constrained, engineering capacity available | Open-source: OpenDroneMap

Orchestration Tool

If... | Then...
--- | ---
Modern Pythonic interface, easier development | Prefect or Dagster
Enterprise standards or existing Airflow infra | Apache Airflow

Security & Compliance Controls

  • Authentication: OAuth 2.0 / JWT for API access
  • RBAC: Program Manager, Data Lead, Field Operator roles
  • Encryption: TLS for transit, AES-256 for storage
  • Audit Trail: Hash-chained event log (Merkle tree optional)
  • Credential Checks: Applicator license validation before mission release
  • Data Retention: Raw imagery 12-24 months; processed data 5+ years; audit logs indefinite
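The hash-chained audit trail listed above can be sketched with standard-library hashing: each event hash covers the previous event's hash, so any retroactive edit breaks every later link. Field names mirror the `audit_event` table; this is an illustration, not the production implementation.

```python
import hashlib
import json

# Append an event whose hash chains over the previous event's hash.
def append_event(log, actor, action, payload):
    prev_hash = log[-1]["event_hash"] if log else "0" * 64
    body = json.dumps({"actor": actor, "action": action,
                       "payload": payload, "prev_hash": prev_hash},
                      sort_keys=True)
    log.append({"actor": actor, "action": action, "payload": payload,
                "prev_hash": prev_hash,
                "event_hash": hashlib.sha256(body.encode()).hexdigest()})

# Recompute every link; any tampering surfaces as a hash mismatch.
def verify_chain(log):
    prev = "0" * 64
    for e in log:
        body = json.dumps({"actor": e["actor"], "action": e["action"],
                           "payload": e["payload"], "prev_hash": prev},
                          sort_keys=True)
        if (e["prev_hash"] != prev or
                e["event_hash"] != hashlib.sha256(body.encode()).hexdigest()):
            return False
        prev = e["event_hash"]
    return True

log = []
append_event(log, "pm", "batch_approval", {"batch_id": "b-1"})
append_event(log, "ops", "mission_release", {"mission_id": "m-7"})
print(verify_chain(log))               # True
log[0]["payload"]["batch_id"] = "b-2"  # simulate post-hoc tampering
print(verify_chain(log))               # False
```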

AI Query Interface Architecture (Phase 2)

SkyGuard's architecture supports a local LLM for natural language data queries and narrative summary generation. This capability is planned for Phase 2 deployment after core pipeline validation.

System Flow

┌──────────────┐     ┌──────────────┐     ┌──────────────┐     ┌──────────────┐
│  User Query  │────▶│  Local LLM   │────▶│  Data Layer  │────▶│  LLM Format  │
│  (Natural    │     │  (Intent +   │     │  (SQL/API    │     │  (Narrative  │
│  Language)   │     │  SQL Gen)    │     │  Queries)    │     │  Summary)    │
└──────────────┘     └──────────────┘     └──────────────┘     └──────────────┘
                                                                      │
                                                               ┌──────┴──────┐
                                                               │   Export    │
                                                               │  (PDF/CSV)  │
                                                               └─────────────┘

Components

Component | Technology | Function
--- | --- | ---
LLM Engine | Local model (Llama 3, Mistral, or fine-tuned) | Intent parsing, narrative generation
Query Builder | Text-to-SQL from intent | Translate NL to database queries
Data Grounding | Query results injection | Ensure outputs reflect actual data
Report Formatter | Templates + LLM fill | Structured output generation

Security Constraints

  • Query restrictions: Text-to-SQL limited to SELECT operations only; no INSERT/UPDATE/DELETE permitted
  • Authorized views: LLM queries execute against restricted database views, not raw tables
  • Audit logging: All LLM-generated queries logged with user context

Note: Core pipeline functionality operates independently. LLM interface enhances data accessibility but is not a critical path dependency.

Scaling Considerations

The architecture is designed to scale horizontally at key bottleneck points:

Photogrammetry (Stage 2)

Large surveys can be partitioned into spatial blocks for parallel processing. OpenDroneMap supports distributed mode; Pix4D offers cluster licensing. For Phase 1 pilot (~5,000 acres), single-node processing is sufficient. Parallelization path available when survey volume or frequency increases.

ML Inference (Stages 3-5)

Tile-based processing enables horizontal scaling across multiple GPU workers. The orchestrator can dispatch tiles to a worker pool and aggregate results. Current cost estimate assumes single GPU with burst capacity; additional workers can be added linearly as needed.

Database

PostGIS supports read replicas for query load distribution. Observation table can be partitioned by survey_id or time range if row counts exceed millions. Spatial indices on geometry columns are critical for registry matching performance.

Phase 1 target: ~5,000 acres (estimated 30K–70K palms), monthly surveys, <24h processing. Architecture supports 5-10× scale with horizontal additions, no redesign required.

Backup and Recovery

Component | Backup Strategy | Retention
--- | --- | ---
PostGIS Database | Daily automated snapshots + continuous WAL archiving | 30 days snapshots, 7 days WAL
Object Storage (raw imagery) | Cross-region replication (cloud) or RAID + offsite (on-prem) | Indefinite (raw data is source of truth)
Model Registry | Versioned artifacts in object storage | All versions retained
Audit Log | Append-only with daily export to cold storage | Indefinite (compliance requirement)
Configuration | Infrastructure-as-code in version control | Full git history

Recovery Targets

  • RPO (Recovery Point Objective): <1 hour for database, <24 hours for imagery
  • RTO (Recovery Time Objective): <4 hours for full system restore from backups

Cost Estimation (~5,000 acres, monthly surveys)

Note: This estimate covers cloud infrastructure only, excluding software licenses. Infrastructure costs scale primarily with survey acreage, not palm count. See Appendix G for fully-loaded estimate including Pix4D, Mapbox, and higher storage assumptions.

Component | Specification | Monthly Cost
--- | --- | ---
Compute (API + workers) | 4× c5.xlarge equivalent | $600
GPU (ML inference) | 1× g4dn.xlarge (burst) | $300
Database | db.r5.large (PostGIS) | $250
Storage | 2TB S3 + 100GB EBS | $100
Event streaming | Managed Kafka (if used) | $150
Network | Moderate egress | $50
Total Infrastructure | | ~$1,450/month

Does not include survey flight operations, treatment operations, software licenses, or personnel.

Appendix C: Detection Pipeline

This appendix details the sensing, segmentation, and scoring components of the SkyGuard detection system.

Survey Specifications

RGB Imaging

Parameter | Specification | Rationale
--- | --- | ---
GSD | 2-3 cm | Sufficient for V-cut detection
Overlap | 80% front, 70% side | Enables SfM and frame redundancy
Altitude | 60-80m AGL | Balance GSD vs. coverage rate
Speed | 8-12 m/s | Minimize motion blur
Sun angle | >30° | Reduce harsh shadows

Multispectral (Optional Phase)

Parameter | Specification | Rationale
--- | --- | ---
Bands | R, G, B, RE, NIR | Standard vegetation indices
GSD | 5-8 cm | Spectral signal priority
Calibration | Reflectance panel pre/post | Temporal comparability

Palm Crown Segmentation

Data Annotation

  • Select representative orthomosaics across zones and conditions
  • Label palm crowns using QGIS or dedicated labeling platform
  • Target: 12,000+ annotated crowns for initial training
  • Include edge cases: overlapping crowns, shadows, partial visibility

Model Architecture

Primary model: U-Net with EfficientNet-B4 encoder

Component | Specification
--- | ---
Input | 512×512 RGB tiles, 2-3 cm GSD
Output | Binary mask (crown/background)
Augmentation | Rotation, flip, color jitter, blur
Loss | Dice + BCE weighted
Target IoU | ≥0.85 (bench-validated: 0.89 on internal labeled dataset)

Post-Processing Pipeline

  1. Threshold binary mask at 0.5
  2. Connected component labeling
  3. Contour extraction
  4. Polygon simplification (Douglas-Peucker)
  5. Filter by area: min 2m², max 100m²
  6. Filter by circularity: >0.4
  7. Merge overlapping polygons from adjacent tiles

V-Cut Detection (Damage Classification)

Training Data

  • Crown crops labeled: healthy, damaged, confounded
  • Confounded class includes: wind damage, nutrient stress, mechanical pruning
  • Target: 3,000+ labeled examples per class

Model Architecture

Primary model: CNN classifier (ResNet-50 backbone)

Component | Specification
--- | ---
Input | 256×256 crown crops, normalized
Output | V-cut probability [0, 1]
Training | Transfer learning from ImageNet
Validation | Cross-validation + early stopping

Escalation Protocol

When V-cut confidence is in uncertain range (0.4-0.8):

  1. Retrieve all raw frames overlapping crown centroid
  2. Select top-3 by: lowest blur, best sun angle, highest overlap
  3. Extract high-resolution crops from raw frames
  4. Re-run inference on raw crops
  5. Use max score if any crop >0.7; min score if all <0.3
  6. Otherwise flag for calibration sampling
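Steps 5-6 of the escalation protocol reduce to a small aggregation rule. A sketch, with the return convention as an assumption:

```python
# Escalation aggregation: take the max score if any raw-frame crop is
# confidently positive (>0.7), the min if all are confidently negative
# (<0.3); otherwise route the palm to calibration sampling.
def escalate(crop_scores):
    if any(s > 0.7 for s in crop_scores):
        return max(crop_scores), "resolved"
    if all(s < 0.3 for s in crop_scores):
        return min(crop_scores), "resolved"
    return None, "flag_for_calibration"

print(escalate([0.55, 0.82, 0.61]))  # (0.82, 'resolved')
print(escalate([0.12, 0.25, 0.08]))  # (0.08, 'resolved')
print(escalate([0.45, 0.55, 0.50]))  # (None, 'flag_for_calibration')
```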

Spectral Analysis

Vegetation Indices

Index | Formula | Use
--- | --- | ---
NDVI | (NIR - R) / (NIR + R) | General health
NDRE | (NIR - RE) / (NIR + RE) | Chlorophyll stress (early)
GNDVI | (NIR - G) / (NIR + G) | Canopy density
SAVI | (NIR - R) / (NIR + R + L) × (1 + L) | Soil-adjusted

Per-Crown Aggregation

For each crown polygon, extract from raster:

  • Mean, standard deviation
  • 10th and 90th percentiles
  • Anomaly score vs. neighborhood baseline

Risk Score Fusion

Feature Vector (~50 features)

Category | Features
--- | ---
Damage | V-cut probability, count, extent ratio
Spectral | NDVI/NDRE mean, std, percentiles, anomaly
Temporal | NDVI delta, risk delta, days since observation
Spatial | Distance to trap, trap count within 500m, zone prevalence
Structural | Crown area, circularity, height (CHM), area change

XGBoost Configuration

  • Objective: binary logistic
  • Calibration: isotonic regression post-hoc
  • Explainability: SHAP TreeExplainer
  • Output: calibrated probability [0, 1] + top-5 feature drivers

Performance Targets

Metric | Target | Status
--- | --- | ---
Segmentation IoU | ≥0.85 | Bench-validated (0.89 on internal labeled dataset, n=2,400 tiles)
V-cut precision | ≥0.80 | Target
V-cut recall | ≥0.85 | Target
Fusion PPV | ≥50% | Target
Fusion Recall | ≥85% | Target
Calibration (Brier) | <0.25 | Target
Processing latency | <24 hours | Target

Appendix D: Verification & Truth Budget

This appendix describes the outcome verification system and the calibration sampling strategy that provides biological anchoring for program claims.

The Verification Problem

Imaging-based detection and outcome tracking is self-referential: if the model is systematically wrong, outcomes classified by model outputs would not detect the error.

The Truth Budget solves this by allocating a bounded monthly investment in independent biological verification that provides ground truth for model calibration and outcome anchoring.

Operational Monitoring Cadence

Key insight: Verification is not a special follow-up activity—it happens automatically as part of the regular survey cadence.

How It Actually Works

Component | Reality
--- | ---
Survey frequency | ~30 days for entire footprint (with small variance for weather/scheduling)
Treatment tracking | Per-palm: last_treatment_date, last_treatment_id
Outcome calculation | When survey processes, compute days_since_treatment for each treated palm
Window bucketing | Classify into T+30, T+60, T+90 windows based on actual elapsed time

Data Model

-- Per-palm treatment history
palm.last_treatment_date    -- most recent treatment timestamp
palm.last_treatment_id      -- links to treatment record
palm.treatment_count        -- total treatments received

-- When each survey processes:
FOR each palm WITH last_treatment_date:
  days_since = survey.date - palm.last_treatment_date
  
  IF days_since BETWEEN 25 AND 40:  bucket = 'T+30'
  IF days_since BETWEEN 55 AND 70:  bucket = 'T+60'
  IF days_since BETWEEN 85 AND 100: bucket = 'T+90'
  
  -- Compute outcome metrics at this observation point
  delta_risk = current_risk - pre_treatment_risk
  delta_composite = weighted_outcome_score(current, pre)

Why This Matters

  • No special flights: Verification piggybacks on routine surveys
  • Natural variance handled: Actual days (28, 32, 35) classified into windows
  • Continuous tracking: Every survey updates outcome status for all treated palms
  • Asset-level history: Full treatment timeline per palm, not just snapshots

Truth Budget Architecture

Three-Stream Allocation (~200 inspections/month)

Stream | Purpose | Selection | Size
--- | --- | --- | ---
A. Audit | Measure PPV, recall, calibration | Stratified random | 100
B. Active Learning | Improve model on hard cases | Near-threshold, disagreements | 50
C. Sentinel | Detect hidden spread | Below-threshold, fixed plots | 50

Remote Close Inspection (RCI)

Inspection drones capturing crown-level evidence. RCI is the default Level 4A verification modality.

RCI Independence Specification

Requirement | Specification
--- | ---
Separate flight | Not the survey flight; dedicated RCI mission
Distinct geometry | 3-8m altitude (vs. 60-120m survey altitude)
Separate operator | Inspector ≠ survey pilot
Blinding | Reviewer does not see risk score before classification

Outcome Classification

Category | Criteria | Interpretation
--- | --- | ---
Improved | Δcomposite > +0.15 AND no new V-cuts | Treatment effective
Stable | Δcomposite -0.10 to +0.15 | Inconclusive
Declined | Δcomposite < -0.10 OR new V-cuts | Possible failure or reinfestation
Confounded | Non-CRB stressor identified | Excluded from effectiveness calc
Not Yet Observable | <25 days since treatment | Too early to classify

Stratified Estimator Specification

  • Stratum Weights: Proportional to registry counts per stratum
  • Stream Treatment: Stream A for PPV/recall; B+C tracked separately
  • Variance: Per-stratum Wilson interval; linearized aggregation
  • Finite Population Correction: Applied when sample >5% of stratum
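The per-stratum Wilson interval named above can be sketched in a few lines (z = 1.96 for 95% confidence); the cross-stratum aggregation and finite population correction are not shown.

```python
import math

# Wilson score interval for a binomial proportion, e.g. per-stratum PPV
# bounds on the Stream A Audit sample.
def wilson_interval(successes, n, z=1.96):
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return (max(0.0, center - half), min(1.0, center + half))

# Hypothetical example: 62 confirmed positives out of 100 flagged palms.
lo, hi = wilson_interval(62, 100)
print(round(lo, 3), round(hi, 3))  # 0.522 0.709
```

Unlike the naive normal approximation, the Wilson interval stays inside [0, 1] and behaves well for the small per-stratum sample sizes used here.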

Appendix E: Data Governance

This appendix describes the data governance framework, audit trail architecture, and compliance controls that make SkyGuard operations defensible.

Why Auditability Is First-Class

Biosecurity operations face scrutiny from: regulators (aviation, pesticide), community stakeholders, program funders, interagency partners, media and public interest groups.

SkyGuard is designed so that every decision is traceable, every action is logged, and every claim is supported by documented evidence.

Immutable Event Logging

All operational events are recorded in an append-only log with hash chaining:

  • Survey ingestion and QA attestation
  • Registry match/create decisions
  • Model version used for scoring
  • Threshold snapshot used for queueing
  • Batch approval actions (who/when/what)
  • Mission package generation and release
  • Execution logs and reconciliation deltas
  • Verification outcomes and confounder flags

Implementation note: For Phase 1, the audit_event table with hash chaining provides the required immutability guarantees. Kafka or equivalent streaming infrastructure can be added in Phase 2 if real-time event processing or external integration requires it. The audit properties (append-only, hash-linked, tamper-evident) are preserved in either implementation.

Evidence Packets

Each treated palm produces a complete evidence packet containing:

  • Palm ID and registry history
  • Pre-treatment imagery and features
  • Risk score with SHAP explanation
  • Mission target record with constraints
  • Execution trace excerpt
  • T+30/60/90 verification outcomes

Data Integrity and Recovery

SkyGuard data is protected through layered backup and recovery mechanisms:

Component | Strategy | Recovery Target
--- | --- | ---
Palm Registry | Daily DB snapshots + WAL archiving | RPO <1 hour
Raw Imagery | Replicated object storage | No data loss (source of truth)
Audit Log | Append-only with cold storage export | Full history preserved
Model Artifacts | Versioned in registry | All versions retained

Full backup and recovery specifications in Appendix B.

System Boundary

SkyGuard provides detection, targeting, and verification. The following remain operator responsibility:

Area | SkyGuard Provides | Operator Responsible For
--- | --- | ---
Land access | Target coordinates | Access permissions
Pesticide handling | Application coordinates | Procurement, storage, disposal
Incident response | Audit trail, logs | Spill response, complaints
Licensing | Credential verification | Maintaining valid licenses

Appendix F: Validation Plan

This appendix provides detailed validation gates, evidence artifacts, and go/no-go criteria for each phase of SkyGuard deployment.

What "Ready" Means

SkyGuard is "ready for program use" when it produces decision-grade outputs on schedule, with known error bounds, and with defensible records.

Validation Phases

Phase 0: Bench Validation (4-8 weeks)

Exit criteria: Processing latency meets requirement; QA gates functional; audit log integrity verified.

Phase 1: Pilot Launch (Month 1-3)

Exit criteria: ≥95% coverage completion; processing latency stable; mission reconciliation within tolerance.

Phase 2: Verification Emergence (Month 4-6)

Exit criteria: Monthly outcome reports without manual rework; confounders documented.

Phase 3: Scale Proof (Month 7-12)

Exit criteria: Model improvement demonstrated; sustained operations with defined staffing; evidence quality supports public reporting.

Go/No-Go Decision Framework

Hard Stops

  • Audit log integrity failure
  • Unacceptable coverage gaps
  • Credential/compliance failure
  • Pipeline non-reproducibility

Performance Floors

  • PPV minimum: <30% sustained → Halt operations
  • Recall minimum: <70% sustained → Threshold adjustment required
  • Verification missingness: >25% → Coverage remediation required

Appendix G: Implementation Stack

This appendix provides a reference implementation architecture and sprint-based development plan for engineering estimation.

Component Selection Matrix

Layer | Component | Primary Option | Alternative
--- | --- | --- | ---
Object Storage | Raw imagery, artifacts | AWS S3 | MinIO (on-prem)
Spatial Database | Registry, observations | PostgreSQL + PostGIS | —
Event Log | Audit trail | Append-only SQL table | Apache Kafka
Orchestration | Pipeline DAGs | Prefect | Dagster, Airflow
Photogrammetry | Ortho/CHM generation | Pix4D | OpenDroneMap
ML Framework | Segmentation, CNN | PyTorch | —
Risk Scoring | Fusion model | XGBoost | LightGBM
Explainability | Feature attribution | SHAP | —
Model Registry | Versioning, tracking | MLflow | W&B
API Framework | REST endpoints | FastAPI | Flask
Frontend | Dashboard | React | Vue
Mapping | Spatial visualization | Mapbox GL | Leaflet
Routing | Mission optimization | OR-Tools | OSRM
Local LLM (Phase 2) | Natural language interface | Llama 3 / Mistral (fine-tuned) | Phi-3, Qwen
LLM Serving (Phase 2) | Inference runtime | vLLM / llama.cpp | Ollama, TGI
Report Gen | Document export | WeasyPrint | ReportLab

Development Sprints (2-week cycles)

Sprint 0: Kickoff & Design

  • Stakeholder workshops with municipal partners, pilots, applicators
  • Sensor selection: RGB vs. multispectral decision
  • Draft flight SOPs (altitude, speed, overlap, geofencing)
  • Infrastructure selection: cloud vs. on-prem
  • Define naming conventions for survey IDs, model versions
  • Sketch initial database schema

Sprint 1: Data Plane Setup

  • Provision object storage with access policies and lifecycle rules
  • Deploy PostgreSQL + PostGIS; create base tables
  • Implement event log (append-only table or Kafka)
  • Write ingestion scripts (upload + log events)
  • Configure automated backups and restoration runbook

Sprint 2: Photogrammetry Pipeline

  • Containerize photogrammetry engine (Docker)
  • Build ortho generation wrapper script
  • Implement QA metrics: blur, exposure, coverage gaps
  • Create attestation logging for audit trail
  • Define reflight vs. accept thresholds
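To make the Sprint 2 QA metrics concrete, here is a minimal sketch of a blur check using the variance-of-Laplacian heuristic, written in pure Python for illustration. The function names and the 50.0 threshold are placeholders, not values from the flight SOPs; a production pipeline would compute this with OpenCV or NumPy over full-resolution frames.

```python
def laplacian_variance(image):
    """Blur score for a grayscale image given as a 2-D list of floats.
    Sharp imagery has strong edges, so the Laplacian response varies
    widely; a low variance suggests the frame is blurred."""
    h, w = len(image), len(image[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour Laplacian kernel
            lap = (image[y - 1][x] + image[y + 1][x] +
                   image[y][x - 1] + image[y][x + 1] - 4 * image[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def qa_verdict(blur_score, threshold=50.0):
    # Threshold is illustrative; real values come from calibration flights.
    return "accept" if blur_score >= threshold else "reflight"
```

The same pattern extends to exposure (histogram clipping) and coverage-gap checks, each logged as an attestation alongside the survey ID.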

Sprint 3: Palm Detection

  • Data annotation: label palm crowns in QGIS
  • Train U-Net segmentation model (PyTorch)
  • Package model for inference (TorchScript/ONNX)
  • Build inference pipeline → crown polygons → PostGIS
  • Implement registry insertion service

Sprint 4: Damage Classification

  • Create labeled dataset: healthy / damaged / confounded
  • Train CNN classifier (ResNet-50, transfer learning)
  • Implement spectral index computation (GDAL/Rasterio)
  • Integrate features into unified per-palm feature set

Sprint 5: Risk Scoring

  • Assemble ~50 feature vector per palm
  • Train XGBoost fusion model
  • Apply isotonic calibration
  • Integrate SHAP for explainability
  • Implement configurable threshold + holdout logic
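As a sketch of the calibration step, the pool-adjacent-violators algorithm (the standard fitting procedure behind isotonic calibration) can be written in a few lines. `isotonic_fit` is a hypothetical helper for illustration; in practice the team would likely use scikit-learn's `IsotonicRegression` on a held-out calibration set.

```python
def isotonic_fit(outcomes):
    """Pool Adjacent Violators: given 0/1 outcomes sorted by ascending
    raw model score, return calibrated probabilities that are
    monotonically non-decreasing in the score."""
    # Each block tracks [sum of outcomes, count]
    merged = []
    for y in outcomes:
        merged.append([float(y), 1.0])
        # Pool while the sequence of block means decreases
        while (len(merged) > 1 and
               merged[-2][0] / merged[-2][1] > merged[-1][0] / merged[-1][1]):
            v, w = merged.pop()
            merged[-1][0] += v
            merged[-1][1] += w
    fitted = []
    for v, w in merged:
        fitted.extend([v / w] * int(w))
    return fitted
```

The fitted mapping lets the dashboard show "0.7 means roughly 70% of similar palms were truly infested," which is what makes a configurable threshold defensible to stakeholders.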

Sprint 6: Registry Matching

  • Develop matching algorithm (spatial + shape + spectral)
  • Handle merge/split cases with confidence scoring
  • Implement registry update service with event logging
  • Create unit tests with synthetic edge cases
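The matching algorithm above can be sketched as a scored comparison rather than a hard assignment. The weights, the 3 m cutoff, and the routing thresholds below are illustrative assumptions, not tuned values; a real implementation would also fold in spectral similarity and emit match-graph edges with reasons.

```python
import math

def match_confidence(cand_a, cand_b, max_dist_m=3.0):
    """Heuristic confidence that two crown detections from different
    survey cycles are the same palm. Each candidate is a dict with
    'x', 'y' (metres, projected CRS) and 'area' (crown area, m^2)."""
    dist = math.hypot(cand_a['x'] - cand_b['x'], cand_a['y'] - cand_b['y'])
    if dist > max_dist_m:
        return 0.0
    spatial = 1.0 - dist / max_dist_m
    # Shape agreement: ratio of smaller to larger crown area
    shape = min(cand_a['area'], cand_b['area']) / max(cand_a['area'], cand_b['area'])
    return 0.6 * spatial + 0.4 * shape

def route_match(conf, auto_accept=0.8, review_floor=0.4):
    # Low-confidence cases go to a human queue rather than blocking the pipeline.
    if conf >= auto_accept:
        return "auto-match"
    if conf >= review_floor:
        return "human-review"
    return "new-palm"
```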

Sprint 7: Mission Generation

  • Implement clustering (DBSCAN / grid-based)
  • Integrate routing optimization (OR-Tools)
  • Define mission JSON schema with constraints
  • Build mission generator service
  • Create approval API endpoints

Sprint 8: API & Dashboard

  • Design REST API (FastAPI) with OpenAPI spec
  • Implement OAuth/JWT authentication + RBAC
  • Build React frontend with routing and state
  • Integrate Mapbox for spatial visualization
  • Create approval screens and threshold controls

Sprint 9: Execution Logging

  • Define execution log schema (GPS track, events, weather)
  • Build upload API + CLI tool for applicators
  • Implement reconciliation service (planned vs. actual)
  • Create exception handling workflow
  • Add execution stats to dashboard
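The reconciliation service reduces to a set comparison between the approved mission and the applicator's log. A minimal sketch, with hypothetical bucket names feeding the exception-handling workflow:

```python
def reconcile(planned_ids, executed_ids):
    """Compare a mission's planned treatment targets with the palms
    actually logged by applicators, returning exception buckets."""
    planned, executed = set(planned_ids), set(executed_ids)
    return {
        "treated_as_planned": sorted(planned & executed),
        "missed": sorted(planned - executed),    # planned but not treated
        "off_plan": sorted(executed - planned),  # treated but not planned
    }
```

"Missed" palms roll forward into the next mission cycle; "off-plan" treatments trigger review because they lack an approval record.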

Sprint 10: Verification & Outcome Tracking

  • Add treatment tracking fields to palm table (last_treatment_date, last_treatment_id)
  • Implement days_since_treatment calculation on each survey run
  • Build window bucketing logic (T+30/60/90 based on actual elapsed days)
  • Implement change detection: pre-treatment vs. current observation
  • Build outcome classifier with category rules
  • Add confounder detection (drought, storm, pruning flags)
  • Generate evidence packets per treated palm
  • Track "Not Yet Observable" status for recent treatments
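The window bucketing logic above is small enough to sketch directly. The ±10-day tolerance is an assumed operational parameter, not a specified one; the key property is that buckets derive from actual elapsed days, never from survey sequence number.

```python
from datetime import date

def verification_window(treated_on, observed_on, tolerance_days=10):
    """Assign a survey observation of a treated palm to a T+30/60/90
    verification window based on actual elapsed days."""
    elapsed = (observed_on - treated_on).days
    if elapsed < 30 - tolerance_days:
        return "not-yet-observable"
    for target in (30, 60, 90):
        if abs(elapsed - target) <= tolerance_days:
            return f"T+{target}"
    return "out-of-window"
```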

Sprint 11: Model Maintenance

  • Design stratified sampling for ground-truth
  • Build PPV/recall/calibration dashboards
  • Implement drift monitoring with alerts
  • Automate retraining pipeline with MLflow
  • Create canary deployment workflow

Sprint 12: Governance Hardening

  • Implement hash-chained immutable audit log
  • Finalize RBAC across all endpoints
  • Compliance review with legal counsel
  • Define data retention policies and scripts
  • Document incident response procedures
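The hash-chained audit log can be sketched in a few lines: each entry commits to the previous entry's hash, so any retroactive edit breaks every subsequent hash and is detectable on verification. The entry layout below is illustrative, not the production schema.

```python
import hashlib
import json

def append_event(log, event):
    """Append an event (a JSON-serializable dict) to a hash-chained log."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"event": event, "prev": prev_hash, "hash": entry_hash})
    return log

def verify_chain(log):
    """Recompute every hash; returns False if any entry was altered."""
    prev = "0" * 64
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

In practice the chain head would be periodically anchored (e.g., printed in a report or stored off-system) so the log operator cannot silently rebuild the whole chain.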

Sprint 13: AI Query Interface (Phase 2)

Phase 2 enhancement—not required for core pipeline operations. Schedule after Phase 1 validation.

  • Select and deploy local LLM (Llama 3 / Mistral, 7B-13B range)
  • Set up inference server (vLLM or llama.cpp with GPU/CPU quantization)
  • Build text-to-SQL layer for common query patterns (SELECT-only)
  • Implement data grounding: inject query results into prompt
  • Create narrative templates for standard summaries (zone, treatment, outcome)
  • Implement query security: authorized views, audit logging
  • Build export pipeline (narrative → PDF/CSV)
  • User acceptance testing with program managers
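The SELECT-only guard for the text-to-SQL layer can be sketched as a coarse filter. This is a deliberately simplified illustration: a real deployment would also execute queries under a read-only database role against authorized views, which is the actual security boundary.

```python
import re

# Keywords that must never appear in generated SQL, even inside a
# statement that begins with SELECT (e.g. "SELECT 1; DROP TABLE ...").
_FORBIDDEN = re.compile(
    r"\b(insert|update|delete|drop|alter|create|grant|truncate|copy)\b|;",
    re.IGNORECASE,
)

def is_safe_select(sql):
    """Accept only a single SELECT statement with no mutating keywords."""
    stripped = sql.strip().rstrip(";").strip()
    if not stripped.lower().startswith("select"):
        return False
    return _FORBIDDEN.search(stripped) is None
```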

Deployment Configurations

Docker Compose (Development)

services:
  db:
    image: postgis/postgis:15-3.3
    volumes: ["pgdata:/var/lib/postgresql/data"]

  minio:
    image: minio/minio
    command: server /data --console-address ":9001"

  api:
    build: {context: ., dockerfile: Dockerfile.api}
    depends_on: [db, minio]

  worker:
    build: {context: ., dockerfile: Dockerfile.pipeline}
    deploy:
      resources:
        reservations:
          devices: [{capabilities: [gpu]}]

volumes:
  pgdata:

Kubernetes (Production)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: skyguard-api
spec:
  replicas: 3
  selector:
    matchLabels: {app: skyguard-api}
  template:
    metadata:
      labels: {app: skyguard-api}
    spec:
      containers:
      - name: api
        image: skyguard/api:latest
        resources:
          requests: {memory: 512Mi, cpu: 500m}
          limits: {memory: 1Gi, cpu: 1000m}

Cost Estimation (Fully Loaded)

Note: This estimate includes software licenses and higher storage/network assumptions. See Appendix B for cloud-infrastructure-only baseline.

Category | Item | Monthly
Infrastructure | Compute (API + workers) | $600
 | GPU inference - ML models (burst) | $300
 | Database (PostGIS) | $250
 | Storage + Network | $300
Software | Pix4D (if commercial) | $350
 | Mapbox / monitoring | $100
Total Infrastructure (Phase 1) | | ~$1,900/month
Phase 2 | Local LLM inference (add) | +$150

Note: LLM query interface is Phase 2. Phase 1 focuses on core detection/treatment/outcome pipeline.

Build vs. Buy Summary

Core IP (Build In-House)

  • Registry persistence and matching logic
  • Risk scoring + threshold decision contract
  • Verification and outcome classification
  • Compliance-by-design mission packaging
  • Immutable audit trail architecture
  • Calibration sampling and drift monitoring

Commodity (Buy / Integrate)

  • Photogrammetry engine
  • Cloud infrastructure
  • Mapping and visualization
  • Routing optimization libraries

Appendix H: References

This appendix provides references supporting background claims and technical approaches.

CRB Biology and Impact

Bedford, G.O. (1980). Biology, ecology, and control of palm rhinoceros beetles. Annual Review of Entomology, 25(1), 309-339.

Gressitt, J.L. (1953). The coconut rhinoceros beetle with particular reference to Palau Islands. Bernice P. Bishop Museum Bulletin, 212.

Hawaiʻi Department of Agriculture (HDOA). CRB Response Program materials (various).

Remote Sensing and Tree Detection

Weinstein, B.G., et al. (2019). Individual tree-crown detection in RGB imagery using semi-supervised deep learning neural networks. Remote Sensing, 11(11), 1309.

Osco, L.P., et al. (2020). A review on deep learning in UAV remote sensing. International Journal of Applied Earth Observation and Geoinformation, 102, 102456.

Machine Learning Methods

Chen, T., & Guestrin, C. (2016). XGBoost: A scalable tree boosting system. Proceedings of KDD, 785-794.

Lundberg, S.M., & Lee, S.I. (2017). A unified approach to interpreting model predictions. NeurIPS, 30.

Ronneberger, O., et al. (2015). U-Net: Convolutional networks for biomedical image segmentation. MICCAI, 234-241.

Software and Tools

Tool | Use | Reference
PyTorch | Deep learning | pytorch.org
XGBoost | Gradient boosting | xgboost.readthedocs.io
SHAP | Explainability | shap.readthedocs.io
PostGIS | Spatial database | postgis.net
FastAPI | API framework | fastapi.tiangolo.com

Market Research and Policy Sources

Hawaii Invasive Species Council (2025). Coconut Rhinoceros Beetle (CRB) Eradication Battle Plan. dlnr.hawaii.gov/hisc

Hawaii Department of Agriculture and Biosecurity. HB643 Testimony, FY2025. capitol.hawaii.gov

Civil Beat (2025). Hawaiʻi Primes Itself to Battle Biosecurity Threats. civilbeat.org

Civil Beat (2024). Disease Could Kill Most of Hawaii's ʻŌhiʻa Forests in 20 Years. civilbeat.org

U.S. Congress Joint Economic Committee (2024). How the Renewed Compacts of Free Association Support U.S. Economic, National Security, and Climate Goals. jec.senate.gov

U.S. Department of the Interior. Office of Insular Affairs—Compacts of Free Association. doi.gov/oia

USDA NIFA. SBIR/STTR Program Information, FY2025-2026. nifa.usda.gov

CRB Hawaii. The CRB Response; 10 Years with CRB in Hawaii. crbhawaii.org

Governor Josh Green, M.D. (2024). News Release on Biosecurity Funding for Dept. of Agriculture. governor.hawaii.gov

Honolulu Parks and Urban Forestry

City & County of Honolulu DPR. About Parks and Recreation (4,986 acres, 403 facilities). honolulu.gov/dpr

City & County of Honolulu DPR. Division of Urban Forestry (~100,000 street and park trees). honolulu.gov/dpr

Smart Trees Pacific. Citizen Foresters program; Hawaii Forest Action Plan gap analysis. smarttreespacific.org

Honolulu Office of the City Auditor (2020). Audit of DPR Expenditures and Operations.


This reference list will be expanded as the program develops.

Appendix I: Beachhead Customer Profile & Build Estimate

This appendix provides a concrete build estimate for an early adopter deployment, demonstrating that SkyGuard has been scoped to implementation-ready detail.

Beachhead Customer: City & County of Honolulu

Attribute | Value
Coverage Area | ~5,000 acres of municipal park land (DPR manages 4,986 acres across 403 facilities)
Estimated Palm Count | 30,000–70,000 palms (working estimate; see note below)
Survey Cadence | Monthly during active season
Budget Authority | Municipal Parks & Recreation + Emergency Management

Note on palm count: Hawaii lacks a comprehensive palm census—a gap explicitly identified by the Hawaii Forest Action Plan and Smart Trees Pacific. The CRB Response program has surveyed 95,000+ palms across Oahu (including private land, resorts, and military installations), but no species-specific inventory exists for municipal land. Establishing an accurate baseline is a core deliverable of the pilot surveillance program. This data gap strengthens the value proposition: SkyGuard creates the authoritative palm registry that currently does not exist.

Why This Customer First

  • Regulatory pressure: State and federal scrutiny on CRB containment
  • Political visibility: Palm loss in public parks generates constituent complaints
  • Budget capacity: Municipal government with dedicated pest management funding
  • Reference value: Success here validates SkyGuard for other Pacific jurisdictions (Guam, CNMI, American Samoa)
  • Existing relationships: Tethra Systems is Hawaii-based with local credibility
  • Data gap opportunity: No authoritative palm inventory exists—pilot creates foundational asset

What "Built" Means for This Contract

A minimum defensible product requires the full closed-loop spine:

  1. Ingest survey imagery into immutable, auditable data plane
  2. Photogrammetry pipeline with QA attestations
  3. Palm detection and persistent registry
  4. Damage classification and feature extraction
  5. Risk scoring with calibration and explainability
  6. Registry matching across survey cycles
  7. Treatment mission generation with approvals
  8. Execution logging and reconciliation
  9. Ops dashboard for coverage, risk maps, and approvals

Verification and outcome tracking (T+30/60/90 windows) is built in parallel but outcomes naturally lag treatment by 90+ days.

Build Timeline

Milestone A: Treatment-Capable MVP (~20 weeks)

Monitoring → risk queue → missions → execution reconciliation → dashboard

Sprint | Duration | Deliverable
0: Kickoff & Design | 2 weeks | System design, schema, naming conventions
1: Data Plane | 2 weeks | Object storage + PostGIS + event log + ingestion
2: Photogrammetry | 2 weeks | Containerized pipeline, QA metrics, attestation logging
3: Palm Detection | 2 weeks | U-Net segmentation, registry insertion service
4: Damage Classification | 2 weeks | CNN classifier, spectral indices, unified features
5: Risk Scoring | 2 weeks | XGBoost fusion, calibration, SHAP, threshold config
6: Registry Matching | 2 weeks | Cross-cycle matching, merge/split handling
7: Mission Generation | 2 weeks | Clustering, routing, mission schema, approvals
8: API & Dashboard | 2 weeks | FastAPI + React/Mapbox + approval workflows
9: Execution Logging | 2 weeks | Upload API, reconciliation, exception handling

Result: Process surveys, produce prioritized treatment queue, generate missions, capture execution logs, present in dashboard.

Milestone B: Program-Grade Robustness (+6 weeks)

Sprint | Duration | Deliverable
10: Verification | 2 weeks | T+30/60/90 windows, outcome tracking, evidence packets
11: Model Maintenance | 2 weeks | Drift alerts, calibration dashboards, retraining workflow
12: Governance | 2 weeks | Hash-chained audit log, RBAC finalization, compliance docs

Important: Engineering completes at ~26 weeks, but outcome data (T+90 windows) matures ~3 months after first treatment cohorts.

Build Cost Estimate

Staffing Plan (20-26 weeks)

Role | Allocation | Duration | FTE-Months
Tech Lead / Backend Engineer | 1.0 | 6 months | 6
Geospatial / Data Engineer | 0.75–1.0 | 5 months | 4–5
ML Engineer (Computer Vision) | 1.0 | 6 months | 6
Frontend (React/Mapbox) | 1.0 | 4–5 months | 4–5
DevOps / SRE | 0.5 | 6 months | 3
QA / Test Automation | 0.5 | 4 months | 2
Product / Delivery (TPM) | 0.5 | 6 months | 3
Total | | | 28–30

Cost Summary

Category | Low | High | Notes
Core Engineering Labor | $500K | $750K | 28–30 FTE-months @ $18–25K/month
Contingency (15–30%) | $75K | $225K | Integration, data issues, compliance iterations
Data Labeling | $10K | $40K | Crown annotations + damage classes
GPU / Compute (training) | $10K | $60K | Training runs + pipeline testing
Tooling / Licenses | $5K | $15K | Pix4D, Mapbox during build
Total Build Cost | $600K | $1.0M | Most likely: $700K–$850K

Note: This is one-time build cost. Monthly platform operations are ~$1,900/month (see Appendix G).

Operational Scaling Path

Platform development scales differently than field operations. As contracts expand beyond the initial pilot:

  • Survey operations: Additional drone pilots and equipment for increased coverage area or frequency
  • Ground operations: Treatment crews scale with treatment volume, not platform complexity
  • Expansion territories: Local operator partnerships (rather than direct staffing) for Guam, CNMI, and COFA deployments
  • SOPs and training: Standardized procedures and certification programs enable consistent quality across operators

The platform architecture scales horizontally with infrastructure; field operations scale through partnerships and standardized training.

Validation Milestones

Build-Measure-Learn checkpoints for falsifiable progress:

Week | Milestone | Hypothesis Tested
8 | End-to-end survey processing | Technical risk: Can we reliably process real survey data through detection?
12 | Risk model vs. expert judgment | Value hypothesis: Does the model flag the same palms experts would prioritize?
20 | Mission approval by customer | Customer validation: Will Honolulu approve treatment missions generated by this system?
Month 6+ | Outcome data from T+90 windows | Impact hypothesis: Do treated palms show measurably better outcomes than untreated?

Technical Risk Mitigations

The following risks have been identified with specific mitigations built into the sprint plan:

1. Photogrammetry Variability

Risk: Ortho/CHM quality inconsistency causes downstream ML instability.

Mitigation: QA metrics + accept/reflight thresholds + attestation logging in Sprint 2. Store products as Cloud-Optimized GeoTIFFs (COGs) with radiometric calibration versioning. Create "golden survey" regression artifacts for pipeline change validation.

2. Registry Identity Stability

Risk: Unstable palm IDs across survey cycles makes treatment history and outcomes untrustworthy.

Mitigation: Registry matching produces a match graph (edges with confidence + reasons), not just hard assignment. Low-confidence merges/splits route to human resolution queue without blocking pipeline.

3. Model Defensibility

Risk: Municipal stakeholders challenge "why did we treat this tree."

Mitigation: Isotonic calibration + SHAP explainability in Sprint 5. Log model version + feature vector hash + SHAP summary into audit trail for each queued palm. Versioned thresholds with governance controls.

4. Dashboard Performance

Risk: Mapbox rendering degrades with tens of thousands of palms.

Mitigation: Serve palms as vector tiles or pre-aggregated grid layers. PostGIS indices aligned to common query patterns ("high-risk this week", "treated last 30 days").

5. Governance and Security

Risk: City contracts require audit-grade security controls.

Mitigation: Audit-log primitives in Sprint 1–2, hash-chaining in Sprint 12. Every mutation path idempotent and event-logged from day 1. OAuth/JWT + RBAC + TLS + AES-256 at rest.

Speed-to-Market Levers

  • Skip Phase 2 LLM: Not required for core operations; schedule after validation
  • Cloud-first delivery: Docker Compose (dev) + Kubernetes (prod) avoids procurement delays
  • Parallelize early: Data labeling and dashboard build begin in Week 1, while the data plane is being built
  • Define success gates early: Use paper's metrics (coverage, latency, registry stability, PPV/recall)

This estimate demonstrates that SkyGuard has been scoped to implementation-ready detail. The beachhead customer provides a concrete validation target for the platform's core value propositions.