# The Story: From Understanding Minds to Empowering People

*A narrative journey through research, impact, and human-centered innovation*

## The Foundation: Understanding How People Think {#foundation}

My journey into UX research didn't start with design tools or user interviews. It began with a fundamental question: *How do people actually think, learn, and make decisions?*

As a first-generation college student pursuing doctoral studies in Cognitive Psychology at the University of Alabama, I discovered that understanding human cognition wasn't just an academic exercise—it was a lens for seeing how technology could genuinely serve people.

While studying information processing models, cognitive load theory, and human-computer interaction psychology, I realized something crucial: the gap between what technology *can* do and what people *actually need* is where meaningful innovation happens.

This realization came from a deep-seated belief that education is the cornerstone of a prosperous life—a principle that would later shape everything from my approach to AI literacy to my commitment to inclusive design. Being first-gen meant navigating systems without a roadmap, which taught me to question assumptions, seek clarity in complexity, and recognize that empowerment comes through understanding, not just access.

---

## The Pivot: From Academic Research to Real-World Impact {#pivot}

The transition from academic research to UX practitioner wasn't a straight path. As I shared on the Push Pull Podcast, moving from cognitive psychology research to industry required reframing skills—not abandoning them, but translating rigorous scientific methodology into actionable product decisions.

What I discovered was that my training in cognitive psychology wasn't a limitation—it was a superpower. While others might focus on what technology *could* do, I was trained to understand what people *actually* need, especially when they're under stress, facing cognitive overload, or navigating unfamiliar systems.
This perspective became my professional identity: **a cognitive psychologist who turns fuzzy problems into clear product decisions.** It's not about choosing between research rigor and practical application—it's about using evidence-based understanding to create technology that truly serves people.

---

## The Philosophy: Making AI Practical, Respectful, and Genuinely Useful {#philosophy}

Early in my career, I developed a philosophy that would guide every project: *"Making AI feel practical, respectful of humans, and genuinely useful."* This isn't just a tagline—it's a commitment that emerged from watching AI technologies promise transformation while often failing to understand real human needs.

As I wrote in *Igniting Curiosity, Empowering Students with AI Literacy*, "AI is kind of like fire. It has incredible potential to improve our lives but also carries risks we must understand."

The key insight? AI literacy isn't about technical jargon or abstract concepts. It's about empowering people to understand how AI affects their lives, make informed decisions, and maintain sovereignty over their choices. This philosophy connects everything I do—from crisis response systems to organizational training programs to educational initiatives.

---

## Impact Through Research: Stories That Matter {#impact}

### When Crisis Demands Innovation: LAHelpNow

In January 2025, devastating wildfires swept through Los Angeles, displacing thousands of individuals and families overnight. The immediate challenge wasn't just technical—it was human. People experiencing acute trauma and cognitive overload needed verified resources, but existing systems were fragmented, overwhelming, and culturally insensitive.

This wasn't a theoretical problem. People were making life-or-death decisions that required accurate, timely information. And it demanded immediate action.
Co-founding LAHelpNow with Cynthia Leimbach, I applied everything I'd learned about cognitive psychology, UX research, and ethical AI to design "Lala," an AI-powered digital assistant built specifically for crisis response.

The innovation wasn't just in the technology—it was in applying cognitive load theory to design for extreme stress states, ensuring trauma-informed responses, and building safeguards against AI hallucinations that could provide dangerous misinformation.

Within weeks, we evolved from an AI-powered hackathon concept to a fully operational NGO. The system wasn't just deployed—it became a scalable model for disaster response, recognized as groundbreaking in AI-driven crisis assistance and featured on the UX Spotlight podcast.

**The learning?** Ethical AI can be deployed responsibly in life-or-death situations when grounded in cognitive psychology and UX research. Rigorous methodology can adapt to emergency timelines without sacrificing quality. And speed must always be balanced with inclusive design, especially during crisis.

### Equity-Centered Research: The AI & Education Summit

While LAHelpNow addressed immediate crisis needs, another project tackled a systemic challenge: ensuring AI in education serves everyone, not just those with resources.

Over eight months, I co-led a collaborative research initiative that brought together 75 experts at the intersection of education and AI. The challenge? Most conversations about AI in education excluded resource-constrained communities and marginalized voices. Research was often paywalled or proprietary, creating barriers for the very people who needed it most.

Our solution was innovative: an anonymous, equitable platform where all voices were equally valued, regardless of status or affiliation. We secured sponsorship for specialized technology, facilitated live brainstorming sessions, and conducted thorough analysis to produce an open-source report on inclusive learning environments. The result?
A comprehensive resource freely accessible to anyone, with actionable approaches specifically designed for resource-constrained contexts. This project reinforced a core belief: education is a fundamental right, and knowledge should be open-source, not proprietary.

**The insight?** Anonymous platforms democratize expert participation. Equitable research design isn't just ethical—it produces better outcomes. And enabling sovereignty in people requires removing barriers to knowledge, not just providing access.

### Organizational Transformation: The AI Think Tank

Not all impact happens in crisis or through large-scale research initiatives. Sometimes, transformation happens through sustained, patient work within organizations.

I designed and facilitate a bi-weekly "AI Think Tank" where engineers and everyday users share wins, missteps, and ideas. The challenge was familiar: engineers understand technology but not user needs; users need AI but lack technical understanding. Organizational silos prevented knowledge sharing, and fear of AI's role led to low adoption.

The solution? Create psychological safety for honest sharing—including failures. Bridge technical and non-technical teams through bidirectional learning. Ground everything in real B2B workflows and actual use cases, not abstract concepts.

The impact has been steady: increased AI literacy across all skill levels, improved human-AI collaboration, reduced fear and resistance, and a cultural shift toward experimentation and learning from failure. Most importantly, it's made AI "practical, respectful of humans, and genuinely useful" in real organizational contexts.

**The principle?** Sustained engagement beats one-time training. Failure shared becomes learning accelerated. And cross-pollination between engineers and users creates exponentially better AI implementations.

---

## The Approach: Evidence-Based + Empathetic {#approach}

What connects these diverse projects?
A consistent approach that combines rigorous research methodology with deep empathy for human experience.

### Research Rigor

Every project starts with understanding the problem through multiple lenses:

- **User interviews and stakeholder consultations** to surface what people actually need
- **Usability studies and field work** to observe real behavior, not just reported behavior
- **Data analysis and synthesis** to transform fuzzy problems into clear product decisions
- **Mixed methods research design** that honors both quantitative patterns and qualitative insights

### Cognitive Psychology Application

My training in cognitive psychology isn't theoretical—it's practical:

- **Cognitive load theory** informs design for users under stress (like crisis response systems)
- **Information processing models** guide data visualization and interface design
- **Trauma-informed design approaches** ensure technology serves people in vulnerable states
- **Learning and memory considerations** shape how AI literacy is taught and understood

### Ethical Framework

Ethics isn't an afterthought—it's foundational:

- **Responsible AI deployment** that respects privacy and promotes transparency
- **Bias mitigation strategies** that acknowledge and address systemic inequities
- **User consent and control mechanisms** that maintain human sovereignty
- **Advocacy against exploitative research practices** that extract insights without consent

### Cross-Disciplinary Collaboration

The best solutions emerge from diverse perspectives:

- **Bridging technical and non-technical teams** through translation and facilitation
- **Translating across disciplines** (psychology, AI, education)
- **Creating psychological safety** for honest sharing and learning
- **Detail-oriented leadership** that maintains big-picture vision while managing intricate execution

---

## Core Values: Empowerment, Equity, and Evidence {#values}

Underlying every project are core values that shape decision-making:

### Empowerment & Sovereignty

A deep passion for enabling sovereignty in people—not just providing tools, but ensuring people understand how to use them, when to question them, and how to maintain control over their choices. This means education is a cornerstone of prosperity, and knowledge should empower, not overwhelm.

### Human-Centered Technology

Technology serves people, not the reverse. This means focusing on real human needs in real workflows, designing for accessibility and inclusivity, and ensuring AI feels "practical, respectful of humans, and genuinely useful."

### Equity & Inclusion

Centering resource-constrained and marginalized communities isn't optional—it's essential. This means anonymous, equitable participation mechanisms, open-source knowledge for maximum accessibility, and cross-cultural sensitivity in every design decision.

### Research Integrity

Evidence-based decision making, ethical participant treatment, transparency in methods and findings, and quality over speed (except when crisis demands rapid response). This also means advocating against "dark research patterns" that exploit participants.

---

## Looking Forward: Vision and Continued Commitment {#forward}

The future of AI and human-computer interaction isn't predetermined. It's being shaped by the decisions we make today—about who has access to AI literacy, how we design for diverse cognitive needs, and whether we prioritize human sovereignty over technological capability.

My commitment moving forward is clear:

**Continue bridging disciplines** to create solutions that honor both technical possibility and human reality. Whether it's crisis response, education equity, or organizational transformation, the approach remains the same: understand the human experience, apply rigorous research, and design with empathy and ethics.

**Advocate for inclusive AI literacy** that empowers people across all backgrounds and technical skill levels.
This means translating complex concepts into accessible frameworks, creating open-source resources, and ensuring marginalized voices are centered, not excluded.

**Maintain the balance** between innovation and responsibility, speed and quality, technical capability and human need. This balance isn't a compromise—it's what makes technology genuinely useful rather than merely impressive.

**Enable sovereignty** through education, understanding, and empowerment. Because when people understand how AI affects their lives, they can make informed decisions, maintain control, and participate actively rather than passively.

---

## The Thread That Connects Everything {#thread}

If there's one thread that connects my journey from first-gen college student to cognitive psychology researcher to UX practitioner to AI literacy specialist, it's this: **understanding how people think, feel, and behave isn't just interesting—it's essential for creating technology that truly serves humanity.**

Every project, from crisis response to education equity to organizational transformation, starts with the same question: *What do people actually need?* Not what technology can do. Not what's theoretically possible. But what real people, in real contexts, with real constraints, actually need.

This question requires rigorous research, empathetic understanding, ethical commitment, and cross-disciplinary collaboration. It demands that we center marginalized voices, prioritize human sovereignty, and ensure knowledge is accessible, not proprietary.

Most importantly, it requires recognizing that AI—like fire—has incredible potential to improve lives, but also carries risks we must understand. Making AI "practical, respectful of humans, and genuinely useful" isn't a marketing slogan. It's a commitment to ensuring technology serves people, not the other way around.

This is the story. This is the work. This is the vision.
---

*For specific project details, case studies, or collaboration inquiries, explore the portfolio projects or connect via LinkedIn.*

## Explore the Work

See how these principles come to life in real projects.