
Project CORE (Capture–Operate–Render–Execute): A Low-Level Distributed Architecture for Immersive and Robotic Intelligence
The project aims to develop a foundational architecture for immersive and robotic systems, focusing on distributed real-time data capture, low-latency decentralized computation, and dynamic multimodal output generation. The architecture will support collaborative AR/VR environments, human-robot teaming, and ambient computing networks. Key features include ultra-low latency, low-power design, and a hardware-agnostic input layer that accepts data from sources such as sensors, wearables, and edge cameras. The modular compute layer will perform logic, physics, and agent-based reasoning on distributed nodes, while the flexible output layer will generate spatial data, 3D geometry, synthesized video, or action commands. The system will also incorporate AI-enhanced configuration, using LLMs and orchestrator agents to dynamically assign tasks and balance loads. Existing frameworks such as the Robot Operating System (ROS) will also be evaluated for reuse.
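As a concrete starting point, here is a minimal sketch of the Capture-Operate-Render-Execute flow as a pluggable in-process pipeline. Every class and function name below is a hypothetical placeholder for illustration, not an existing ARCortex or ROS API; a real deployment would replace the in-process calls with a distributed message bus.

```python
# Illustrative sketch of the CORE layering (hypothetical names): hardware-
# agnostic capture records flow through pluggable compute stages to
# interchangeable output sinks.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List
import time

@dataclass
class Sample:
    """Hardware-agnostic input record from a sensor, wearable, or camera."""
    source_id: str
    modality: str              # e.g. "imu", "rgbd", "audio"
    timestamp: float
    payload: Any
    meta: Dict[str, Any] = field(default_factory=dict)

class CorePipeline:
    """Capture -> Operate -> Render/Execute, with pluggable stages."""
    def __init__(self) -> None:
        self.compute_stages: List[Callable[[Sample], Sample]] = []
        self.output_sinks: List[Callable[[Sample], None]] = []

    def add_stage(self, fn: Callable[[Sample], Sample]) -> None:
        self.compute_stages.append(fn)

    def add_sink(self, fn: Callable[[Sample], None]) -> None:
        self.output_sinks.append(fn)

    def ingest(self, sample: Sample) -> None:
        for stage in self.compute_stages:   # Operate: logic/physics/agents
            sample = stage(sample)
        for sink in self.output_sinks:      # Render/Execute: geometry, video, commands
            sink(sample)

# Example wiring: stamp each sample with a processing time, then emit a command.
pipeline = CorePipeline()
pipeline.add_stage(lambda s: Sample(s.source_id, s.modality, s.timestamp,
                                    s.payload, {**s.meta, "proc_at": time.time()}))
pipeline.add_sink(lambda s: print(f"action for {s.source_id}: {s.payload}"))
pipeline.ingest(Sample("imu-01", "imu", time.time(), {"accel": [0.0, 0.0, 9.8]}))
```

An orchestrator agent would then decide which stages run on which node; the pipeline abstraction keeps that decision independent of the sensor and output hardware.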

Dynamic Scene Capture and Rendering for AR/VR
Project Overview: This research and development internship aims to explore and prototype real-time 3D scene reconstruction and rendering pipelines based on the latest advancements in Gaussian Splatting (GS) techniques. Interns will investigate and integrate modern methods such as MonST3R (for dynamic scene modeling), 4D-TAM (for dynamic temporal fusion), and others (e.g., Neuralangelo, 3DGS, Gaudi, Splatting Transformers). Interns will capture both static and dynamic world geometry (using RGB-D or video input) and render it using multi-view, photorealistic, time-coherent Gaussian Splatting representations, ultimately generating interactive 3D experiences.
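To make the core rendering idea concrete, below is a minimal NumPy sketch of the blending step at the heart of Gaussian Splatting: 2D Gaussians that have already been projected and depth-sorted are alpha-composited front to back. Real 3DGS pipelines add projection of 3D covariances, view-dependent color via spherical harmonics, and tile-based GPU rasterization; this sketch covers only the compositing rule.

```python
# Front-to-back alpha compositing of depth-sorted 2D Gaussians -- the core
# blending rule in Gaussian Splatting renderers. Covariance projection and
# view-dependent color are omitted for brevity.
import numpy as np

def composite_pixel(px, gaussians):
    """gaussians: list of (mean2d, inv_cov2d, color_rgb, opacity),
    sorted near-to-far. Returns the blended RGB for pixel px."""
    color = np.zeros(3)
    transmittance = 1.0                 # fraction of light still unblocked
    for mean, inv_cov, rgb, opacity in gaussians:
        d = px - mean
        alpha = opacity * np.exp(-0.5 * d @ inv_cov @ d)  # Gaussian falloff
        color += transmittance * alpha * np.asarray(rgb)
        transmittance *= (1.0 - alpha)
        if transmittance < 1e-4:        # early exit once nearly opaque
            break
    return color

# Two overlapping splats: a near red one and a far blue one.
splats = [
    (np.array([5.0, 5.0]), np.eye(2) / 4.0, (1.0, 0.0, 0.0), 0.8),
    (np.array([6.0, 5.0]), np.eye(2) / 9.0, (0.0, 0.0, 1.0), 0.9),
]
print(composite_pixel(np.array([5.5, 5.0]), splats))
```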

Re-Architecting ARCortex's Unity-Based AR Platform
ARCortex's immersive AR/VR platform, originally built in Unity with multi-API integration, provided a modular foundation for geospatial, multi-user AR experiences. However, rapid advances in AI, no-code tools, and agent-based computing now enable a paradigm shift. This project aims to re-architect the platform from the ground up, leveraging new methods that drastically reduce human development effort while increasing creative flexibility, scalability, and runtime adaptability.

The redesign will explore:
- Model Context Protocol (MCP) and the A2A protocol for agent coordination and composability (an illustrative coordination sketch follows the deliverables below)
- No-code and low-code interfaces for rapid prototyping and domain-expert use
- Computer-use agents for intelligent orchestration of world-building, asset generation, and behavior scripting
- On-the-fly world and video generation, using generative AI to create immersive environments and scenes without manual modeling
- Emerging physics AI models to simulate realistic interactions without manual tuning
- Evaluation of scene-based vs. video-based authoring workflows to optimize for performance and storytelling
- Adoption of dynamic runtime systems that load assets and behaviors based on context, reducing app size and increasing flexibility

Deliverables:
- Reimagined architecture design based on agents, generative AI, and modular execution
- Prototype implementation integrating no-code pipelines, AI-driven scene generation, and cloud/edge execution
- Evaluation report on scene-based vs. video-based experiences, including authoring time and user engagement metrics
- Demonstration system showing an end-to-end workflow, from high-level user prompt to deployed immersive AR/VR experience
- Security and performance audit to validate the new architecture for real-world deployment
- MCP-compliant system design enabling distributed component management and AI-driven orchestration
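As one plausible shape for the agent-coordination layer, the sketch below shows an orchestrator routing high-level authoring tasks to registered specialist agents. It is a plain-Python illustration of the routing idea, not the MCP or A2A wire format, and every name in it is a placeholder.

```python
# Illustrative task routing for agent-based authoring (placeholder names;
# this mimics the shape of MCP/A2A-style coordination, not the actual
# protocols).
from typing import Callable, Dict

class Orchestrator:
    """Routes high-level authoring tasks to registered specialist agents."""
    def __init__(self) -> None:
        self.agents: Dict[str, Callable[[str], str]] = {}

    def register(self, capability: str, agent: Callable[[str], str]) -> None:
        self.agents[capability] = agent

    def handle(self, capability: str, prompt: str) -> str:
        if capability not in self.agents:
            raise KeyError(f"no agent registered for {capability!r}")
        return self.agents[capability](prompt)

orc = Orchestrator()
orc.register("scene_gen", lambda p: f"[generated scene graph for: {p}]")
orc.register("behavior_script", lambda p: f"[behavior script for: {p}]")

# A single high-level user prompt fans out to specialist agents.
print(orc.handle("scene_gen", "rainy rooftop chase, night, city skyline"))
print(orc.handle("behavior_script", "drone follows player at 3 m"))
```

In an MCP-compliant design, the registry would be populated by protocol-level capability discovery rather than hard-coded registration calls.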

Augmented Reality-Focused Marketing Strategy and Implementation
ARCortex, a pioneer in Augmented Reality with over 30 years of cross-sector expertise, seeks to modernize its marketing and outreach strategy by incorporating AI-driven workflows, agent pipelines, and immersive web architecture. This project will reimagine how ARCortex presents and promotes its capabilities, shifting from traditional marketing to automated, intelligent, and interactive experiences powered by recent advances in the Model Context Protocol (MCP), the A2A protocol, and frameworks like NANDA (Neural Architecture for Networked Digital Agents).

The initiative will explore:
- AI-based customer journey automation using conversational agents and multi-channel engagement tools (email, LinkedIn, X, web chat)
- Intelligent content creation pipelines that generate personalized outreach content, interactive demos, and tailored pitches for different industries using LLMs and design AI (a small pipeline sketch follows the deliverables below)
- MCP-compatible modular marketing architecture, enabling real-time updates to campaigns, messaging, and digital presence by orchestrating specialized AI agents
- Immersive web redesign with spatial UI elements, 3D/AR interfaces, and dynamic scene generation to showcase ARCortex's products in context
- AI analytics and feedback loops, tracking campaign effectiveness and autonomously suggesting pivots or optimizations based on engagement metrics

Deliverables:
- Comprehensive marketing strategy document integrating agent-based automation, AI content generation, and immersive storytelling
- Redesigned web mockup or prototype featuring spatial layout options (e.g., floating, room-scale, or terrain-grounded AR content), built using XR-friendly technologies
- Multi-platform campaign automation plan, including prompt libraries and workflows for personalized outreach using tools like AutoGPT, CrewAI, and others
- Feasibility and resource analysis covering platform costs (e.g., ElevenLabs, Zapier, ChatGPT API), content pipeline scalability, and human-in-the-loop requirements
- Prioritized rollout roadmap, with high-impact, low-cost actions first (e.g., LinkedIn automation plus demo-capture outreach), leading to full deployment of AI-powered marketing infrastructure
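To ground the content-pipeline idea, here is a small sketch of one stage: a prompt template filled per prospect and handed to a pluggable text-generation function. The `generate` callable is a stub standing in for whichever LLM API the team adopts; the template fields and prospect data are illustrative.

```python
# Sketch of one stage of a personalized-outreach pipeline: a prompt template
# is filled per prospect and passed to a pluggable LLM callable. `generate`
# is a stub; a real API client would be swapped in during the project.
from typing import Callable, Dict

TEMPLATE = (
    "Write a 3-sentence outreach note to {name} at {company} in the "
    "{industry} sector, highlighting how geospatial multi-user AR could "
    "apply to {pain_point}."
)

def draft_outreach(prospect: Dict[str, str],
                   generate: Callable[[str], str]) -> str:
    return generate(TEMPLATE.format(**prospect))

stub_llm = lambda prompt: f"<draft based on: {prompt[:60]}...>"
print(draft_outreach(
    {"name": "Dana", "company": "Acme Rail", "industry": "transportation",
     "pain_point": "field technician training"},
    stub_llm,
))
```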

Simulation and Rendering of Structure Fires and Wildfires Using AI and Novel Rendering Techniques
This project tasks a team of student interns with developing a dual-mode fire simulation and visualization system: one mode focused on structure fires inside buildings, the other on wildfires across natural and semi-urban landscapes. The goal is to combine emerging rendering technologies and physically informed simulation models to create visually compelling, data-driven, and computationally efficient representations of fire behavior in different environments.

For the structure fire module, students will simulate how fire and smoke propagate through a building using architectural geometry and material properties as key inputs. Fire behavior will be influenced by the location and type of fuels encountered (e.g., drywall, wood flooring, fabric furniture), airflow between rooms, and barriers like closed doors. Smoke and flame spread will be animated using efficient volumetric or particle methods, and enhanced with modern techniques such as Gaussian splatting or neural texture synthesis to achieve realistic effects suitable for mobile or AR deployment.

The wildfire module will focus on modeling fire progression across large-scale outdoor terrain. Students will incorporate available environmental data, such as terrain elevation, vegetation types, satellite fire perimeter observations, and weather forecasts (e.g., wind, humidity), to simulate wildfire behavior over time. The team will integrate propagation models, either rule-based or data-driven (a toy rule-based sketch follows this overview), and visualize the output in a way that clearly communicates risk zones, direction of spread, and burn intensity. Rendering will be optimized to handle large areas while maintaining immersive quality, potentially leveraging AI models for dynamic smoke and fire visualization at scale.

Throughout the project, students will learn to combine physics-informed modeling, real-time graphics techniques, and AI-driven rendering to prototype tools that could support decision-making or situational awareness in firefighting, training, or public safety AR applications. They will work collaboratively to build, test, and document modular components, potentially using Unity or similar engines, and investigate performance trade-offs between visual realism and computational efficiency.
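As one concrete example of a rule-based propagation model the wildfire module might start from, the sketch below implements a simple cellular automaton on a fuel grid with a wind bias. The cell states, ignition probability, and wind weighting are illustrative choices for the sketch, not calibrated fire science.

```python
# Toy rule-based wildfire spread: a cellular automaton on a fuel grid with a
# wind bias. States: 0 = unburnt fuel, 1 = burning, 2 = burnt out.
# Ignition probabilities and wind weights are illustrative, not calibrated.
import numpy as np

rng = np.random.default_rng(0)

def step(grid: np.ndarray, p_base: float = 0.3,
         wind: tuple = (0, 1), wind_boost: float = 0.4) -> np.ndarray:
    new = grid.copy()
    rows, cols = grid.shape
    for r, c in np.argwhere(grid == 1):        # every currently burning cell
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr, dc) == (0, 0) or not (0 <= rr < rows and 0 <= cc < cols):
                    continue
                if grid[rr, cc] == 0:          # neighbor still has fuel
                    p = p_base
                    if (dr, dc) == wind:       # downwind neighbor ignites more easily
                        p += wind_boost
                    if rng.random() < p:
                        new[rr, cc] = 1
        new[r, c] = 2                          # burning cell burns out
    return new

grid = np.zeros((20, 20), dtype=int)
grid[10, 2] = 1                                # ignition point, wind blowing east
for _ in range(10):
    grid = step(grid)
print(np.count_nonzero(grid == 2), "cells burnt after 10 steps")
```

The same grid could be driven by elevation and vegetation layers instead of uniform fuel, and the per-step burn state feeds directly into the rendering layer as a risk-zone texture.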

Cybersecurity Compliance and Risk Assessment for AR Solutions
ARCortex, a company specializing in Augmented Reality (AR) solutions, aims to compete for government programs and access Controlled Unclassified Information (CUI). To achieve this, the company must comply with stringent cybersecurity frameworks such as the Cybersecurity Maturity Model Certification (CMMC) and NIST SP 800-171. This project involves assessing ARCortex's current cybersecurity posture and developing a comprehensive plan and roadmap to implement the necessary fixes. The assessment will include a detailed review of the company's general IT environment and a focused analysis of the unique cybersecurity risks associated with AR devices such as Head-Mounted Displays (HMDs) and drones. Urgent recommendations should be identified and implemented promptly to mitigate immediate risks.

Drone-Based IoT Sensor Localization and Optimal Positioning
ARCortex has developed a drone equipped with multiple antennae capable of receiving data from IoT sensors on the ground. The primary objective of this project is to evaluate the drone's ability to map or triangulate the locations of these IoT emitters using the signals received by each antenna. The project will involve understanding the limitations and specifications required to achieve accurate localization. Additionally, the project aims to determine the optimal positions for the drone to both receive data effectively from each IoT sensor and maintain a strong communication link with the Ground Control Station (GCS) collecting the data. This project will allow learners to apply their knowledge of signal processing, drone navigation, and IoT systems.

Key tasks include:
- Analyzing the signal data received by the drone's antennae
- Developing algorithms for triangulating the positions of IoT sensors (a least-squares sketch follows this list)
- Identifying limitations and necessary specifications for accurate localization
- Determining optimal drone positions for data reception and GCS communication
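As a starting point for the triangulation task, the sketch below estimates an emitter position from range estimates taken at several drone positions, using linearized least squares. The ranges here are simulated directly from distance; real RSSI or time-of-arrival measurements would need a propagation model and more careful noise handling.

```python
# Least-squares multilateration: estimate an IoT emitter's 2D position from
# range estimates at several drone positions. Each range equation
# |x - a_i|^2 = r_i^2 is linearized by subtracting the first equation.
import numpy as np

def multilaterate(anchors: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    """anchors: (n, 2) drone positions; ranges: (n,) measured distances."""
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2)
         - ranges[1:] ** 2 + r0 ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)   # solve A x = b
    return x

rng = np.random.default_rng(1)
emitter = np.array([30.0, -12.0])
drone_fixes = np.array([[0, 0], [50, 0], [50, 50], [0, 50], [25, 25]], float)
dists = np.linalg.norm(drone_fixes - emitter, axis=1)
dists += rng.normal(0, 0.5, dists.shape)        # simulated measurement noise
print("estimate:", multilaterate(drone_fixes, dists))  # close to (30, -12)
```

The geometry of the drone fixes controls the conditioning of the system, which connects directly to the optimal-positioning task: widely spread, non-collinear fixes give a better-conditioned solve.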

Real-Time AR Mesh Alignment Using AI
ARCortex is seeking the development of a real-time process or algorithm to align and merge static terrain mesh data with dynamically scanned 3D mesh data collected locally around the user. The goal is to ensure accurate collision detection and occlusion in their AR system, which is built in Unity and targets Android and iOS. The project involves exploring AI techniques to enhance the speed and robustness of the alignment process. The solution should be capable of running on a mobile device or communicating with the Unity app via an API, even if that requires a separate process or machine. This project gives learners the opportunity to apply their knowledge of computer vision, AI, and AR development in a practical, industry-relevant context.

Key points:
- Develop a real-time algorithm for aligning static and dynamic 3D mesh data (a classical ICP baseline is sketched below)
- Ensure accurate collision detection and occlusion in the AR system
- Explore AI techniques for faster and more robust alignment
- Implement the solution to be compatible with Unity on Android and iOS
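One classical baseline for the alignment step is point-to-point ICP. The NumPy/SciPy sketch below aligns sampled vertices of the scanned mesh to the terrain mesh and assumes both are already in roughly the same frame; the learned or AI-accelerated methods explored in the project would be benchmarked against something like this.

```python
# Point-to-point ICP baseline for aligning scanned mesh vertices to static
# terrain vertices. Assumes a rough initial alignment already exists.
import numpy as np
from scipy.spatial import cKDTree

def icp(source: np.ndarray, target: np.ndarray, iters: int = 30):
    """source, target: (n, 3) and (m, 3) vertex samples.
    Returns (R, t) such that source @ R.T + t approximates target."""
    tree = cKDTree(target)
    R, t = np.eye(3), np.zeros(3)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src)                # nearest terrain vertex per point
        matched = target[idx]
        # Best-fit rigid transform via SVD (Kabsch/Procrustes).
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
        R_step = Vt.T @ np.diag([1, 1, d]) @ U.T
        t_step = mu_m - R_step @ mu_s
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step  # accumulate total transform
    return R, t

# Synthetic check: recover a small known rotation and offset.
rng = np.random.default_rng(2)
terrain = rng.uniform(-5, 5, (500, 3))
theta = np.radians(4)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
scan = terrain @ Rz.T + np.array([0.2, -0.1, 0.05])
R, t = icp(scan, terrain)
print("residual:", np.abs(scan @ R.T + t - terrain).max())
```

On device, the same loop would run on downsampled vertex sets, with the resulting (R, t) streamed to the Unity app over the project's API so collision and occlusion meshes stay registered.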