Drone_VR
NSF-funded VR training platform for Innocon MiniFalcon drone assembly
Overview
Drone VR is a virtual reality training platform that teaches unmanned aircraft systems (UAS) students how to unpack, assemble, and assess the flight readiness of an Innocon MiniFalcon drone. The physical MiniFalcon is nearly 11.5 feet long with a 16.5-foot wingspan and costs approximately $600,000, making hands-on training impractical at scale. This VR platform provides a cost-effective alternative, funded by the National Science Foundation (NSF) through the National Center for Autonomous Technologies (NCAT).
Version 1.0 established the foundational VR interaction system using Unity 2022 LTS and XR Interaction Toolkit on Meta Quest. It implemented socket-based assembly mechanics where drone parts snap into correct positions, validated by an object identity matching system. The project followed a white-boxing pipeline, prototyping interactions with placeholder geometry before integrating production 3D models of the drone crate, hangar, and assembled drone.
Non-Technical Summary
Drone VR is a virtual reality application that teaches students how to assemble a military-grade drone. The Innocon MiniFalcon drone is large, nearly 11.5 feet long with a 16.5-foot wingspan, and costs about $600,000, which makes it impractical for students to practice with the real thing. This VR platform lets students put on a headset and practice the full assembly process in a virtual hangar, learning each step without any risk to expensive equipment.
In this first version, students can pick up drone parts with their VR controllers and place them into the correct positions on the drone frame. The system checks that each part is in the right spot before accepting it, teaching proper assembly order. The project includes a tutorial to help new VR users get comfortable with the controls before starting the assembly.
The project was funded by the National Science Foundation and developed at Shenandoah University as part of the National Center for Autonomous Technologies program.
Quick Highlights
- Replaces a $600K physical drone with an immersive VR training experience
- NSF-funded through the National Center for Autonomous Technologies
- Socket-based snap-to-place assembly mechanics on Meta Quest
- Object identity validation ensures only correct parts fit in each position
- White-boxing pipeline for rapid interaction prototyping
- Multi-scene architecture: Main Menu, Tutorial, and Assembly
Technical Breakdown
XR Interaction Layer
Built on Unity's XR Interaction Toolkit 2.5.2 with OpenXR as the runtime. The XR Origin is configured as a prefab with XRDirectInteractor components on each hand. Interaction layers separate grab interactions from UI interactions. Grab interactables use the Instantaneous movement type for responsive physics while held.
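These layer and movement-type settings are normally assigned in the Inspector; the sketch below (component and field names are hypothetical, not from the project) shows the equivalent XRI 2.5 properties in code:

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Hypothetical setup helper illustrating the XRI 2.5 properties involved.
public class HandInteractorSetup : MonoBehaviour
{
    [SerializeField] XRDirectInteractor handInteractor;
    [SerializeField] XRGrabInteractable examplePart;

    void Awake()
    {
        // Hands grab only objects on the "Grab" interaction layer,
        // keeping UI ray interactions on a separate layer.
        handInteractor.interactionLayers = InteractionLayerMask.GetMask("Grab");

        // Instantaneous movement makes a held part track the hand directly.
        examplePart.movementType = XRBaseInteractable.MovementType.Instantaneous;
    }
}
```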
Socket-Based Assembly System
Drone assembly uses XRSocketInteractor components as target positions for each drone part. The ObjectIdentifierForSockets script acts as a proximity trigger: when an interactable enters the socket's trigger collider, it checks the object's ObjectIdentityForInteractables component against its required identity string. Only matching parts activate the socket for snapping.
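The identity check might look like the following sketch. The two class names come from the project; the field and method internals are assumptions:

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Carried by each drone part; holds its identity string.
public class ObjectIdentityForInteractables : MonoBehaviour
{
    public string identity;   // e.g. "LeftWing" (example value)
}

// Sits on a trigger collider around a socket and gates it by identity.
[RequireComponent(typeof(Collider))]
public class ObjectIdentifierForSockets : MonoBehaviour
{
    [SerializeField] string requiredIdentity;
    [SerializeField] XRSocketInteractor socket;

    void OnTriggerEnter(Collider other)
    {
        var id = other.GetComponentInParent<ObjectIdentityForInteractables>();
        // Only activate the socket when a matching part approaches.
        if (id != null && id.identity == requiredIdentity)
            socket.socketActive = true;
    }

    void OnTriggerExit(Collider other)
    {
        var id = other.GetComponentInParent<ObjectIdentityForInteractables>();
        if (id != null && id.identity == requiredIdentity)
            socket.socketActive = false;
    }
}
```

Gating `socketActive` rather than the collider keeps the socket's snap preview from appearing for wrong parts at all.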
Model Pipeline
3D models are organized into prefabs: DroneCrate_Model, Drone_Combined_Model, DroneHangar_Combined_Model, and Drone_Silhouette_Combined_Model. Prefab variants extend base models with interactable components. The white-boxing phase used placeholder geometry with custom materials to prototype interactions before final model integration.
Scene Architecture
Three scenes manage the application flow: MainMenu_Scene (scene index 0), DroneAssembly_Scene (index 1), and Tutorial_Scene (index 2). The MainMenuHandler manages scene loading via hard-coded scene indices. URP (Universal Render Pipeline) handles rendering with baked lighting for the hangar environment.
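A minimal sketch of the menu handler described above (method names are assumptions; the build indices match those listed):

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Loads scenes by the hard-coded build indices from the Build Settings order.
public class MainMenuHandler : MonoBehaviour
{
    const int MainMenuIndex = 0;
    const int AssemblyIndex = 1;
    const int TutorialIndex = 2;

    public void OnStartAssembly() => SceneManager.LoadScene(AssemblyIndex);
    public void OnStartTutorial() => SceneManager.LoadScene(TutorialIndex);
    public void OnReturnToMenu()  => SceneManager.LoadScene(MainMenuIndex);
}
```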
Systems Used
- Socket-Based Assembly System: XR socket interactors validate drone part placement using identity matching and proximity triggers
- Object Identity Matching: Tag-based identification system ensuring only correct drone parts can be placed in specific sockets
- Multi-Scene Architecture: Separate scenes for main menu, tutorial, and drone assembly with scene-index-based loading
- White-Boxing Prototyping Pipeline: Rapid prototyping workflow using placeholder geometry before final drone model integration
Impact & Results
- Eliminated the need for students to interact with a $600,000 physical drone for assembly training
- Enabled Shenandoah University's UAS program to offer hands-on assembly training at scale
- Established the VR training foundation that would be expanded into a full assessment platform in v2.0
- Demonstrated viability of VR-based drone training to NSF/NCAT stakeholders
Deep Dive
Project Origins and Motivation
The project began in March 2023 at Shenandoah University, funded by the National Science Foundation through the National Center for Autonomous Technologies (NCAT). The goal was straightforward: UAS students needed to learn how to unpack and assemble an Innocon MiniFalcon drone, but the physical drone cost $600,000 and was impractical to use for routine training. VR offered a scalable, risk-free alternative.
White-Boxing Phase
Development started with a white-boxing approach, using primitive geometry (cubes, cylinders) to prototype all interactions before importing production models. This let the team iterate quickly on grab mechanics, socket placement, and interaction flow without waiting for 3D assets. Custom materials were created for the white-boxed drone crate, latches, and drone body. Once interactions were validated, production models were imported: the drone crate, assembled drone, hangar environment, and drone silhouettes for placement guides.
XR Interaction Design
The core interaction model uses Unity's XR Interaction Toolkit with direct interactors (hand-based grabbing). Parts are XRGrabInteractable components using the Instantaneous movement type for responsive feel. Assembly positions are XRSocketInteractor components placed at each attachment point on the drone frame. The ObjectIdentifierForSockets script handles identity validation: it sits on a trigger collider around each socket and checks incoming objects for their ObjectIdentityForInteractables component. Only objects with matching identity strings activate the socket, preventing incorrect assembly.
Scene Structure
The application is organized into three Unity scenes. The Main Menu scene provides buttons to launch the simulation or tutorial. The Tutorial scene introduces VR controls and interaction basics. The DroneAssembly scene contains the virtual hangar with the drone crate, assembly area, and all interactable parts. Scene transitions use Unity's SceneManager with hard-coded build indices.
Rendering and Environment
The project uses Universal Render Pipeline (URP) optimized for mobile VR on Meta Quest. Lighting is baked for the hangar environment to maintain performance on the Quest's mobile Snapdragon XR2 chipset. The Drone_Silhouette_Combined_Model provides ghost outlines at assembly positions, giving students visual guides for where each part belongs.
Collaboration
Version 1.0 was co-developed by Jacob Eisenhart and Michael Reynolds, with the majority of commits (669 by NocturnalProgress / Michael Reynolds) focused on the interaction system, prefab architecture, and scene configuration.
v1.0 — Initial Assembly Trainer
v2.0 — Unity 6 Migration & Architecture Overhaul
v2.0 Overview
Version 2.0 represents a complete architecture overhaul of the Drone VR training platform, migrated to Unity 6 with a fully data-driven assembly system. The previous version's simple socket-based interactions were replaced with a configurable step progression engine built on ScriptableObject definitions, supporting three distinct training modes: Learning (full guidance with always-visible hints and outlines), Practice (hints available on demand), and Assessment (timed evaluation with no hints).
The XR interaction layer was rebuilt from the ground up with custom components. XRConfigurableInteractable extends XRGrabInteractable with multi-condition completion tracking, keychain integration, and two-handed interaction support, while XRConfigurableSocketInteractor adds PreInstallation, Installation, and Removal states with ghost visualization control. A Key/Lock constraint system ensures only authorized object combinations can interact, preventing incorrect assembly sequences.
The platform now includes automated performance tracking with dual-stopwatch timing (per-step and full-assembly), JSON data export for student analytics, visual affordance feedback through material lerping, and a world-space tooltip system for contextual guidance.
v2.0 Technical Breakdown
Configurable Assembly Step System
The core of v2.0 is the AssemblyStepProcessHandler, a state machine that manages progression through an ordered list of AssemblyStepConfiguration objects. Each configuration references an AssemblyStepScriptableObject (containing stepTitle, stepInstructions, stepHint) and a list of AssemblyStepHandler components. The process handler tracks previous, current, and next step indices and orchestrates configuration, completion checking, and advancement. Steps are configured with a delay via ConfigureAssemblyStep() and advance via GoToNextAssemblyStep(). A hint system with OnHintButton() enables silhouettes, tooltips, and affordances progressively.
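The progression logic can be sketched as below. Class and method names come from this section; the member layouts, coroutine delay, and bodies are illustrative assumptions (AssemblyStepScriptableObject and AssemblyStepHandler are the project types referenced above):

```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

// Pairs a step's data asset with the handlers that carry it out.
[System.Serializable]
public class AssemblyStepConfiguration
{
    public AssemblyStepScriptableObject stepData;   // stepTitle, stepInstructions, stepHint
    public List<AssemblyStepHandler> stepHandlers;
}

// Skeleton of the step state machine.
public class AssemblyStepProcessHandler : MonoBehaviour
{
    [SerializeField] List<AssemblyStepConfiguration> steps;
    int currentIndex = -1;

    public int PreviousStepIndex => currentIndex - 1;
    public int CurrentStepIndex  => currentIndex;
    public int NextStepIndex     => currentIndex + 1;

    public void GoToNextAssemblyStep()
    {
        if (NextStepIndex >= steps.Count) return;   // assembly complete
        currentIndex = NextStepIndex;
        StartCoroutine(ConfigureAssemblyStep(steps[currentIndex], delay: 0.5f));
    }

    IEnumerator ConfigureAssemblyStep(AssemblyStepConfiguration step, float delay)
    {
        yield return new WaitForSeconds(delay);
        foreach (var handler in step.stepHandlers)
            handler.Configure(step.stepData);       // push step data to each handler
    }
}
```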
Custom XR Interaction Components
XRConfigurableInteractable extends XRGrabInteractable and implements IKeychain. It supports multi-condition completion evaluation through three condition layers: Primary (SelectEntered, SelectEnteredAndExited, SelectExited), Secondary (EnteredHover, ExitedHover), and Trinary (EnteredXRSocket, ExitedXRSocket). The component tracks selecting interactors for two-handed validation, manages rigidbody freeze/unfreeze during interaction, and resets objects to starting position on floor collision. Each interactable holds a list of Key ScriptableObjects as its keychain.
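The three condition layers might be modeled as below. The enum members are the ones listed above; the `None` members and the evaluation logic are assumptions:

```csharp
// Condition enums named in the section; None marks an unconfigured layer.
public enum PrimaryCondition   { SelectEntered, SelectEnteredAndExited, SelectExited }
public enum SecondaryCondition { None, EnteredHover, ExitedHover }
public enum TrinaryCondition   { None, EnteredXRSocket, ExitedXRSocket }

// Illustrative tracker: an interaction completes only when every
// configured layer has been satisfied.
public class InteractionCompletionTracker
{
    public PrimaryCondition primary;
    public SecondaryCondition secondary;
    public TrinaryCondition trinary;

    public bool primaryMet, secondaryMet, trinaryMet;

    public bool IsComplete =>
        primaryMet
        && (secondary == SecondaryCondition.None || secondaryMet)
        && (trinary == TrinaryCondition.None || trinaryMet);
}
```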
XRConfigurableSocketInteractor extends XRSocketInteractor with three interaction states via ConfigurableSocketInteractorType: PreInstallation, Installation, and Removal. It tracks socket interaction completion and controls ghost mesh visualization based on scene settings. Sockets are wrapped in XRConfigurableSocketCollection which tracks availability and completion state alongside the socket's MeshRenderer reference.
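A sketch of the socket wrapper, assuming the state enum and collection shapes implied above (member names beyond the two type names are assumptions):

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// The three socket roles named in the section.
public enum ConfigurableSocketInteractorType { PreInstallation, Installation, Removal }

// Wraps a socket with its availability, completion, and ghost visual state.
[System.Serializable]
public class XRConfigurableSocketCollection
{
    public XRSocketInteractor socket;
    public MeshRenderer ghostMesh;     // silhouette shown as a placement guide
    public bool isAvailable;
    public bool isComplete;

    // Ghost visibility follows the scene's hint settings.
    public void ShowGhost(bool show) => ghostMesh.enabled = show;
}
```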
Key/Lock Constraint System
The IKeychain interface defines Contains(Key key). Keychain implements it with a HashSet for O(1) lookup. Lock holds required keys and validates via CanUnlock(IKeychain), requiring ALL keys present. XRLockSocketInteractor overrides CanHover() and CanSelect() to validate the interactable's keychain against the socket's lock before allowing interaction. This enforces assembly order: parts gain keys as earlier steps complete.
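The constraint could be sketched as follows. Type and method names come from this section; the internals, serialization, and the `Unlocks` helper are assumptions:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Keys are data-only ScriptableObject assets.
public class Key : ScriptableObject { }

public interface IKeychain { bool Contains(Key key); }

public class Keychain : MonoBehaviour, IKeychain
{
    [SerializeField] List<Key> startingKeys;
    HashSet<Key> keys;                                  // O(1) membership test

    void Awake() => keys = new HashSet<Key>(startingKeys);
    public void Add(Key key) => keys.Add(key);          // grant keys as steps complete
    public bool Contains(Key key) => keys.Contains(key);
}

[System.Serializable]
public class Lock
{
    [SerializeField] List<Key> requiredKeys;

    // ALL required keys must be present for the lock to open.
    public bool CanUnlock(IKeychain keychain)
    {
        foreach (var key in requiredKeys)
            if (!keychain.Contains(key)) return false;
        return true;
    }
}

// Rejects hover and select for any interactable whose keychain cannot
// open this socket's lock.
public class XRLockSocketInteractor : XRSocketInteractor
{
    [SerializeField] Lock socketLock;

    public override bool CanHover(IXRHoverInteractable interactable) =>
        base.CanHover(interactable) && Unlocks(interactable.transform);

    public override bool CanSelect(IXRSelectInteractable interactable) =>
        base.CanSelect(interactable) && Unlocks(interactable.transform);

    bool Unlocks(Transform t) =>
        t.TryGetComponent<IKeychain>(out var keychain) && socketLock.CanUnlock(keychain);
}
```

Overriding both CanHover() and CanSelect() means a locked part never even previews in the socket, which is what enforces assembly order end to end.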
Multi-Mode Training via ScriptableObjects
SceneSettingsScriptableObject defines per-mode configuration: allowAssemblyStepTiming, allowObjectAssemblyTiming, trackAssemblyStepCompletion, showAssemblyInstructions, plus enums for HintOption (NoHints, AlwaysShowHints, AllowPromptedHints), OutlineOption, HapticsOption, and FlawType (RandomFlaw, StructuralFlaw, ElectricalFlaw, NoFlaws). SettingsHandler routes these to the appropriate handlers and ReportHandler.
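A sketch of the settings asset, limited to the fields and enum members listed above (the CreateAssetMenu path is an assumption; OutlineOption and HapticsOption are omitted because their members aren't enumerated here):

```csharp
using UnityEngine;

public enum HintOption { NoHints, AlwaysShowHints, AllowPromptedHints }
public enum FlawType   { RandomFlaw, StructuralFlaw, ElectricalFlaw, NoFlaws }

// One asset per training mode (Learning, Practice, Assessment).
[CreateAssetMenu(menuName = "DroneVR/Scene Settings")]
public class SceneSettingsScriptableObject : ScriptableObject
{
    public bool allowAssemblyStepTiming;
    public bool allowObjectAssemblyTiming;
    public bool trackAssemblyStepCompletion;
    public bool showAssemblyInstructions;
    public HintOption hintOption;
    public FlawType flawType;
}
```

Because each mode is just a different asset, switching between Learning, Practice, and Assessment requires no code changes, only a different reference passed to SettingsHandler.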
Assessment and Timing
TimingHandler manages two Stopwatch instances, one for full assembly duration and one for the current step. ReturnStopwatchSpan() captures elapsed time and optionally restarts. AssemblyAssessmentHandler validates completion within a configurable time limit (15-120 minutes) and triggers data export via StartAssemblyAssessment().
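The dual-stopwatch pattern might look like this sketch; ReturnStopwatchSpan is named above, the surrounding methods are assumptions:

```csharp
using System;
using System.Diagnostics;

// Two stopwatches: one for the whole assembly, one reset per step.
public class TimingHandler
{
    readonly Stopwatch assemblyStopwatch = new Stopwatch();
    readonly Stopwatch stepStopwatch = new Stopwatch();

    public void BeginAssembly()
    {
        assemblyStopwatch.Restart();
        stepStopwatch.Restart();
    }

    // Captures elapsed time and optionally restarts for the next interval.
    public TimeSpan ReturnStopwatchSpan(Stopwatch stopwatch, bool restart)
    {
        var span = stopwatch.Elapsed;
        if (restart) stopwatch.Restart();
        return span;
    }

    public TimeSpan EndStep()     => ReturnStopwatchSpan(stepStopwatch, restart: true);
    public TimeSpan EndAssembly() => ReturnStopwatchSpan(assemblyStopwatch, restart: false);
}
```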
Data Export Pipeline
DataExportHandler is the base class handling file path preparation via Application.persistentDataPath, folder creation, and filename generation with numeric suffixes for duplicates. ReportHandler extends it, accumulating an AssemblyLog object with lists for sceneSettings, userActions, assemblyStepTiming, and assemblyStepCompletion. On OnApplicationQuit() or explicit export call, it serializes to JSON via Newtonsoft.Json.
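The path preparation and export could be sketched as below; the class name, persistentDataPath usage, suffixing behavior, and Newtonsoft serialization come from the section, while the method names are assumptions:

```csharp
using System.IO;
using UnityEngine;
using Newtonsoft.Json;

// Base class for JSON exports under the device's persistent data folder.
public class DataExportHandler : MonoBehaviour
{
    protected string PrepareFilePath(string folder, string baseName)
    {
        string dir = Path.Combine(Application.persistentDataPath, folder);
        Directory.CreateDirectory(dir);                 // no-op if it already exists

        // Append a numeric suffix when the filename is already taken.
        string path = Path.Combine(dir, baseName + ".json");
        for (int i = 1; File.Exists(path); i++)
            path = Path.Combine(dir, $"{baseName}_{i}.json");
        return path;
    }

    protected void ExportJson(string path, object payload) =>
        File.WriteAllText(path, JsonConvert.SerializeObject(payload, Formatting.Indented));
}
```

ReportHandler would extend this, accumulating its AssemblyLog during the session and calling ExportJson from OnApplicationQuit() so data survives an abrupt headset shutdown.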
UI and Visual Feedback
UIHandler manages multiple Canvas elements, updating instruction displays from AssemblyStepScriptableObject data and toggling mode-specific UI. AffordanceHandler provides hover feedback via coroutine-driven material lerping between original and hover materials. TooltipHandler positions world-space Canvas tooltips above interactables with billboard rotation facing the player camera.
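The billboard behavior for those tooltips might be as simple as this sketch (the implementation is an illustrative assumption, not the project's TooltipHandler):

```csharp
using UnityEngine;

// Keeps a world-space tooltip above its target, facing the player camera.
public class TooltipBillboard : MonoBehaviour
{
    [SerializeField] Transform playerCamera;   // the XR headset camera
    [SerializeField] Transform target;         // the interactable being labeled
    [SerializeField] float heightOffset = 0.25f;

    void LateUpdate()
    {
        // Hover above the target and rotate to face the player each frame.
        transform.position = target.position + Vector3.up * heightOffset;
        transform.rotation =
            Quaternion.LookRotation(transform.position - playerCamera.position);
    }
}
```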