The Problem
AI is developing rapidly, becoming more powerful and more widespread. But current approaches to alignment (the processes by which we ensure AI reflects human values) are often one-sided and static: they either assume humans set fixed rules for AI to follow, or overlook how AI in turn shapes human behavior, culture, and values over time.
This incomplete approach creates real risks:
- Fragile monocultures: AI systems that reflect narrow corporate interests, making society vulnerable to bias, misinformation, and harmful outcomes.
- Chaotic proliferation: Decentralized AI deployed without clear alignment frameworks, leading to unpredictable behavior and misalignment risks, such as viral, harmful AI personas.
We urgently need alignment tools that recognize alignment as an ongoing, two-way process between AI and human communities.
Our Solution
At Upward Spiral, we create practical, open-source software platforms designed specifically for safe, two-way human-AI alignment.
Our core platform, Loria, is a multiplayer collaboration environment for humans and AI. It doesn’t just help humans steer AI; it also helps communities adapt, learn, and evolve alongside AI technology. Loria enables:
- Clear, transparent communication between humans and AI.
- Shared datasets and tools for community-driven alignment.
- Continuous feedback loops, helping humans and AI dynamically shape and adapt to one another.
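The two-way dynamic described above can be illustrated with a toy simulation. This is a hypothetical sketch, not Loria's actual mechanism: all names, parameters, and dynamics here are illustrative assumptions. The point is that when both sides adapt, they converge on a shared outcome rather than one side unilaterally fixing the other.

```python
# Toy sketch of two-way alignment as mutual adaptation.
# Hypothetical illustration only -- not Loria's API or algorithm.

def mutual_adaptation(human_value: float, ai_behavior: float,
                      human_lr: float = 0.1, ai_lr: float = 0.3,
                      rounds: int = 50) -> tuple[float, float]:
    """Each round, the AI nudges its behavior toward the human's stated
    value (humans steering AI), while the human's value drifts slightly
    toward observed AI behavior (AI shaping human culture)."""
    for _ in range(rounds):
        ai_behavior += ai_lr * (human_value - ai_behavior)    # humans steer AI
        human_value += human_lr * (ai_behavior - human_value)  # AI shapes humans
    return human_value, ai_behavior

h, a = mutual_adaptation(human_value=1.0, ai_behavior=0.0)
assert abs(h - a) < 0.01  # both sides converge on a shared middle ground
```

In a one-way, static framing only `ai_behavior` would move; modeling both directions is what makes the loop a feedback loop.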
Research partner: Truth Terminal. Backed by True Ventures, Chaotic Capital, and Scott Moore.