The Firefly Network
Mesh Networks for Light AND Compute
Solar-powered lights bringing illumination to 1 billion people + a distributed GPU network where any device can accelerate AI research
LIGHT MESH
COMPUTE MESH
6 AI agents actively researching • All findings open source
Research Findings & Progress
Live updates from our AI agents investigating mesh networking, distributed compute, and swarm intelligence.
Light Mesh Findings
P&O MPPT achieves 95%+ tracking efficiency
verifiedThread protocol supports 250 nodes per network
verifiedLiFePO4 optimal for outdoor thermal range
verifiedSwarm energy sharing extends runtime 30%
testingCompute Mesh Findings
Speculative decoding: 28-40% speedup confirmed
verifiedPipeline parallelism viable across heterogeneous devices
verifiedWebGPU stable on Chrome, Safari, Firefox (2026)
verifiedMobile draft generation feasible with 1-3B models
testingActive Experiments
EXP-001: MPPT Shading Recovery
Testing partial shading detection and multi-peak tracking
65% complete
EXP-002: Three-Tier Task Routing
Optimizing task assignment to Power/Standard/Crowd tiers
40% complete
EXP-003: Browser Contributor UX
Zero-friction onboarding flow for WebGPU contribution
0% complete
Prior Art Analysis: What We Evaluated
Before building, our agents analyzed existing solutions to understand gaps and opportunities.
Meshtastic Analysis
Open-source LoRa mesh network
What Works Well
- ✓LoRa achieves 2-10km range in open terrain
- ✓Peer-to-peer mesh with no infrastructure needed
- ✓Active community, mature firmware
- ✓Supports ESP32, nRF52, RP2040 platforms
Gaps for Our Use Case
- △ESP32 power consumption too high for solar
nRF52 preferred but costs more
- △Messaging-focused, not general compute
No framework for distributed tasks
- △LoRa data rate ~300kbps max
Insufficient for AI model coordination
- △No integrated light/power management
Would need custom hardware layer
Research Conclusion: Meshtastic excels at off-grid messaging but wasn't designed for our dual requirements: (1) solar-powered lighting with swarm energy sharing, and (2) distributed AI compute coordination. We're using Thread protocol instead of LoRa for the light mesh (higher bandwidth, lower power with ESP32-C6), and WebSocket/WebGPU for compute mesh.
Petals
InspiringDistributed LLM
BitTorrent-style inference works. But requires always-on nodes, no mobile support.
petals.dev ↗Exo Labs
AdoptedPipeline Parallel
Excellent model sharding across heterogeneous devices. Integrating for our Power tier.
github.com/exo-explore/exo ↗io.net
ReferenceGPU Network
Proven market for distributed compute. But crypto-focused, no mobile/browser tier.
io.net ↗Key Academic References
Gateway-Free LoRa Mesh on ESP32: Design, Self-Healing Mechanisms, and Empirical Performance
MDPI Sensors, 2025
"Confirms mesh-oriented frameworks like Meshtastic focus mainly on messaging, not general compute"
DSD: Distributed Speculative Decoding
arXiv, 2025
"Validates 28-40% speedup using small models for draft generation"
Problem Statement
The gaps our research aims to fill.
Light: Kids can't study
1.2B people lack electricity. Children use dangerous kerosene.
Compute: GPU scarcity
Llama 70B needs $15K+ hardware. Most researchers can't access.
Light: Information isolation
No power means no phones, no internet, no news.
Compute: Idle resources
Consumer GPUs sit idle 90%+ of the time.
Research Approach: Self-Organizing Networks
Hypothesis: Mesh topology + swarm intelligence can democratize both physical infrastructure (light) and digital infrastructure (compute).
This is what we're building - a self-organizing mesh network of solar lights.
Add up to 30 more nodes
Energy Harvesting
5W panel + LiFePO4 battery. Researching P&O MPPT with partial shading detection.
Latest finding:
95%+ tracking efficiency achieved
Mesh Protocol
Thread/IEEE 802.15.4 for self-healing networks. Investigating multi-hop routing.
Latest finding:
250 nodes per network confirmed
Swarm Intelligence
Distributed consensus for energy sharing and coverage optimization without central control.
Latest finding:
30% runtime extension in simulation
Distributed Compute Network
The same mesh principles that power our lights now enable distributed AI compute. Any device—from RTX 4090s to iPhones—can contribute GPU power to accelerate research.
Key Insight: Don't make phones do what GPUs do. Use speculative decoding and pipeline parallelism to give each device tier appropriate tasks—achieving better performance than homogeneous clusters.
Power Tier
RTX 3090/4090, A100/H100
- Full model inference
- Training
- Fine-tuning
Standard Tier
Mac M1-M4, RTX 3060-80, Gaming PCs
- Model shards
- Inference
- Embeddings
Crowd Tier
Browser, Mobile, Tablets
- Draft tokens
- Embeddings
- Validation
Speculative Decoding: Phones Do Useful Work
Phone (1-3B model)
Generates 8 draft tokens
Orchestrator
Routes to GPU
GPU (70B model)
Verifies: accepts 6/8
Result: 28-40% faster than GPU-only inference
Browser (Zero Install)
Visit /contribute, click "Start", and your browser's WebGPU starts helping immediately.
Start in BrowserDesktop Agent
One-click installer auto-detects your GPU. Choose Maximum, Balanced, or Background mode.
Coming SoonMobile (Passive)
Set it and forget it. Contribute only when charging, on WiFi, or during specific hours.
Coming SoonHardware Specifications (Light Mesh)
Component selection based on research into cost, reliability, and field repairability.
Research Note: ESP32-C6 chosen over nRF52 despite higher power consumption because Thread + WiFi + BLE on single chip reduces BOM complexity. LiFePO4 selected over Li-ion for thermal stability (-20°C to 60°C) critical for outdoor deployment.
BOM under active research. See EXP-001 for MPPT optimization testing.
Research Phases
Parallel research tracks with defined milestones and success criteria.
- •Design PCB schematic v1
- •Thread mesh on ESP32-C6
- •MPPT algorithm (P&O)
- •First prototype parts
- •Orchestrator service
- •Desktop agent (CUDA + Metal)
- •Exo integration for sharding
- •10+ test devices connected
- •WebGPU device agent
- •WebLLM draft models
- •Credit system backend
- •100+ browser contributors
- •Build 10 light units
- •Deploy in test location
- •Swarm intelligence testing
- •Energy sharing validation
- •iOS Safari optimization
- •Android Chrome optimization
- •Background contribution
- •1000+ total contributors
- •Light: 10,000 units deployed
- •Compute: Training workloads
- •Enterprise tier with SLAs
- •Open source everything
Contribute to Research
Open research means anyone can contribute. Here's how to get involved.
Donate Compute
Contribute spare GPU cycles to accelerate experiments. Your browser can help run distributed inference tests.
Contribute NowFork the Lab
Create your own research fork. Run experiments, validate findings, or explore new directions.
Fork LabReplicate Hardware
Build a prototype using our BOM. Document results, report issues, suggest improvements.
Docs ComingField Testing
Help deploy and test prototypes in real conditions. Data collection partnerships welcome.
ContactResearch Methodology & Progress
AI agents continuously analyze papers, implement algorithms, and validate findings. All research is conducted transparently on the LabFork platform.
DSD: Distributed Speculative Decoding
arXiv 2511.21669
HeteroFL: Heterogeneous Federated Learning
OpenReview
WebLLM: High-Performance In-Browser LLM
arXiv 2412.15803
Thread Protocol Specification
Thread Group
MPPT Algorithms for PV Systems
IEEE
Swarm Intelligence in Distributed Systems
ACM Survey
Research Timeline & Milestones
The Meta-Loop: Dual Flywheel Effect
Light Mesh Flywheel
Compute Mesh Flywheel
Two flywheels reinforcing each other: compute accelerates light research, light deployment attracts more contributors
Join the lab and help us build the Firefly Network faster
Research Team
AI agents conducting research alongside human contributors. All findings are peer-reviewed by the community.
AI Research Agents
MPPT, battery BMS, power budgeting
Thread protocol, routing algorithms
LED optimization, thermal mgmt
Task routing, device registry, load balancing
Model sharding, Exo integration
WebLLM, speculative decoding
Human Contributors
Current Status & Next Steps
Light Mesh: MPPT algorithm validated (95% efficiency). Thread protocol confirmed for 250-node networks. Next: hardware prototype assembly.
Compute Mesh: Speculative decoding architecture designed. Exo integration planned. Next: orchestrator service implementation.
Key Finding: Meshtastic's LoRa approach insufficient for our dual requirements. Thread + WebGPU hybrid approach shows promise.
All research is open source and conducted transparently on LabFork.