How transitioning from structural engineering to AI product leadership taught me that understanding human decision-making is more complex—and more rewarding—than any finite element analysis.
The Breaking Point
Five years ago, I was hunched over a CAD workstation in São Paulo, running my thousandth finite element analysis on a concrete formwork system. The software hummed through its calculations, dividing complex structural loads into manageable mathematical chunks. Each element had predictable properties. Each node connected with mathematical precision. The system was elegant, deterministic, and completely logical.
But something felt fundamentally incomplete.
While I could predict exactly how concrete would behave under load, I watched project managers make decisions that defied every optimization model I’d built. I saw teams ignore data-driven recommendations in favor of “gut feelings.” I witnessed perfectly engineered solutions fail because we hadn’t accounted for how humans actually think and choose.
That dissonance launched a journey from analyzing structural systems to understanding the most complex system of all—the human mind making decisions under uncertainty.
The Engineering Mindset: A Foundation, Not a Limitation
People assume that transitioning from engineering to behavioral science means abandoning analytical thinking. The reality is more nuanced. Engineering doesn’t just teach you to solve problems—it teaches you to think in systems, break complexity into manageable components, and validate assumptions with data.
These skills translate powerfully to decision science, but with a crucial twist. In structural engineering, materials have known properties. Steel yields at predictable stress levels. Concrete fails according to established formulas. The beauty of finite element analysis lies in this predictability—you can model reality with mathematical precision.
The Engineering Translation: Just as we divide complex structures into finite elements for analysis, we can divide complex decisions into cognitive components—attention allocation, memory retrieval, probability assessment, and value computation.
Human decision-making operates differently. Instead of yield strength, we work with cognitive biases. Instead of stress-strain curves, we analyze probability weighting functions. Instead of material fatigue, we study mental fatigue and its impact on judgment quality.
The engineering foundation proved invaluable because it taught me to respect both the elegance of mathematical models and their limitations. When Kahneman and Tversky’s Prospect Theory showed that humans systematically violate expected utility theory, I didn’t dismiss it as “irrational.” I recognized it as engineers recognize material properties—this is simply how the system behaves under real conditions.
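That "system behavior" can be written down. Below is a minimal sketch of Kahneman and Tversky's Prospect Theory value function, using their published 1992 parameter estimates (alpha ≈ 0.88 for diminishing sensitivity, lambda ≈ 2.25 for loss aversion); the function name and the specific gamble are illustrative, not from the original text.

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain or loss x, relative to a reference point."""
    if x >= 0:
        return x ** alpha           # diminishing sensitivity to gains
    return -lam * (-x) ** alpha     # losses loom larger than equal gains

# A 50/50 gamble of +$100 / -$100 has zero expected value, yet its
# prospect-theoretic value is negative, so most people reject it.
gamble = 0.5 * prospect_value(100) + 0.5 * prospect_value(-100)
```

Under expected utility with a linear value function the gamble is a wash; under the curve above, the loss term outweighs the gain term, which is exactly the "violation" an engineer should read as a material property, not a defect.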
The Transition: From Concrete to Cognitive Load
My first real exposure to decision science came through an MBA program focused on applying Lean methodologies to non-software environments. The parallels were striking. Just as we used statistical process control to identify manufacturing variations, we could use behavioral insights to identify decision-making patterns.
The breakthrough moment came during a project optimizing construction workflows. Traditional approaches focused on equipment efficiency and material logistics. But the biggest delays weren’t technical—they were human. Supervisors made suboptimal resource allocation decisions under time pressure. Teams chose familiar but inefficient approaches over optimized but unfamiliar alternatives.
We needed to engineer the decision environment, not just the physical environment.
```python
# Traditional engineering approach
def optimize_workflow(equipment, materials, timeline):
    return minimize_cost_function(equipment + materials + timeline)

# Behavioral engineering approach
def optimize_decision_environment(context, cognitive_load, social_dynamics):
    # Account for human decision-making patterns
    choice_architecture = design_nudges(context)
    cognitive_budget = calculate_mental_effort(cognitive_load)
    social_proof = leverage_team_dynamics(social_dynamics)
    return optimize_outcomes(
        choice_architecture +
        cognitive_budget +
        social_proof
    )
```
The contrast is easy to summarize. Traditional engineering optimization is a linear process: manipulate physical and logistical variables directly to minimize cost (`optimize_workflow`, `minimize_cost_function`). Behavioral engineering first shapes the decision-making environment, designing nudges, budgeting mental effort, and leveraging team dynamics, and only then runs the final optimization (`design_nudges`, `calculate_mental_effort`, `leverage_team_dynamics`, `optimize_outcomes`).
This realization led me to explore how environmental design influences choice architecture. Richard Thaler’s work on nudging revealed that small changes in how options are presented can dramatically improve decision outcomes—without restricting choice or changing incentives.
For an engineer accustomed to brute-force optimization, this was revelatory. Instead of imposing solutions, we could design systems that naturally guide better decisions.
The Data-Driven Revelation
Moving from Paus (a media finance platform) to TES (educational technology) and now to North AI taught me that decision-making patterns repeat across industries, but the stakes and contexts change dramatically.
In media, we analyzed how filmmakers made investment decisions under uncertainty. Traditional financial models suggested certain strategies, but actual behavior revealed systematic deviations driven by loss aversion, the availability heuristic, and social proof.
In education, we discovered that teachers’ technology adoption followed predictable patterns based on cognitive load theory. The most elegant solutions often failed because they increased mental effort during already-stressful periods. Success required designing tools that reduced cognitive burden while improving outcomes.
Now at North AI, we’re building systems that understand attention and engagement at a neurological level. We’re literally measuring how brains respond to content, then using that data to predict audience reactions. It’s finite element analysis for human attention—breaking complex psychological responses into measurable, predictable components.
Attention Modeling Framework: We divide video content into temporal segments (like finite elements) and model cognitive load, visual salience, and emotional response at each segment. This allows us to predict engagement patterns and optimize content structure for maximum impact.
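The framework above can be sketched in a few lines. Everything here is illustrative: the `Segment` attributes, the linear weights in `engagement`, and the dropoff threshold are hypothetical stand-ins for a real model, not North AI's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float            # seconds into the video
    visual_salience: float  # 0..1, how strongly visuals capture attention
    cognitive_load: float   # 0..1, mental effort the segment demands
    emotional_peak: float   # 0..1, emotional intensity of the segment

def engagement(seg: Segment) -> float:
    # High salience and emotion sustain attention; excess load erodes it.
    # Weights are illustrative, chosen only to make the analogy concrete.
    return (0.4 * seg.visual_salience
            + 0.4 * seg.emotional_peak
            - 0.2 * seg.cognitive_load)

def predicted_dropoff(segments, threshold=0.25):
    """Return the first timestamp where modeled attention falls below threshold."""
    for seg in segments:
        if engagement(seg) < threshold:
            return seg.start
    return None

segments = [
    Segment(start=0.0,  visual_salience=0.9, cognitive_load=0.3, emotional_peak=0.8),
    Segment(start=10.0, visual_salience=0.2, cognitive_load=0.9, emotional_peak=0.1),
]
dropoff = predicted_dropoff(segments)  # attention collapses at the second segment
```

The structural parallel holds: segments play the role of finite elements, the engagement function plays the role of a stress calculation, and `predicted_dropoff` is the failure criterion.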
The Neuroscience Connection: Where Engineering Meets Psychology
The convergence of neuroscience and AI represents the ultimate fusion of engineering precision with human understanding. At North AI, we’re developing what we call “synthetic audiences”—AI systems that can predict how different demographic groups will respond to video content by modeling the underlying neural processes.
This isn’t just sophisticated A/B testing. We’re building computational models of attention allocation, memory formation, and emotional response. When someone watches a video, their brain follows predictable patterns—visual salience captures initial attention, narrative structure maintains engagement, and emotional peaks create memorable moments.
These patterns can be measured, modeled, and optimized just like any engineering system. But unlike structural analysis, we’re optimizing for human flourishing rather than material performance.
The technical challenge mirrors finite element analysis in interesting ways:
- Geometric Division → Temporal Segmentation: Instead of dividing structures into spatial elements, we divide attention into time-based segments
- Stress Distribution → Cognitive Load Distribution: Instead of calculating mechanical stress, we model mental effort across different cognitive systems
- Material Failure → Engagement Dropoff: Instead of predicting structural failure, we predict when audiences will stop paying attention
But the ultimate goal isn’t just prediction—it’s empowerment. By understanding how attention works at a fundamental level, we can help creators make content that genuinely connects with audiences rather than manipulating them.
The Statistical Foundation: From Six Sigma to Behavioral Insights
My Six Sigma training proved unexpectedly valuable when transitioning to decision science. Both fields require rigorous statistical thinking, experimental design, and healthy skepticism toward correlation versus causation.
The key difference lies in what you’re measuring. Manufacturing quality focuses on defect reduction and process control. Decision science focuses on choice quality and judgment calibration.
Both require understanding normal distributions, confidence intervals, and statistical significance. But decision science adds layers of complexity around cognitive biases, social influences, and temporal effects that don’t exist in manufacturing processes.
For example, measuring video engagement isn’t just counting views or click-through rates. We need to account for:
- Selection bias in who chooses to watch
- Survivorship bias in who completes viewing
- Recency bias in how people rate content immediately after consumption versus days later
- Social proof effects where early viewer reactions influence later viewers
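One standard way to correct for the first two biases is inverse-propensity weighting: each observed viewer stands in for all the similar audience members who never clicked. The numbers and viewer types below are made up purely to illustrate the direction of the correction.

```python
# (completed_viewing, p_selected): p_selected is the assumed probability
# that this kind of viewer clicks on the video at all.
viewers = [
    (True, 0.9),   # enthusiasts: likely to click, likely to finish
    (True, 0.9),
    (False, 0.2),  # casual viewers: rarely click, rarely finish
]

# Naive completion rate, computed only over people who showed up.
naive_rate = sum(c for c, _ in viewers) / len(viewers)

# Inverse-propensity weighting: a viewer observed with probability p
# represents 1/p viewers of their type in the broader audience.
weights = [1 / p for _, p in viewers]
weighted_rate = sum(c * w for (c, _), w in zip(viewers, weights)) / sum(weights)
```

Because the rarely-clicking casual viewer is upweighted, the corrected rate comes out well below the naive one: less flattering, but closer to how the full audience would actually behave.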
The statistical rigor of an engineering background helps you navigate these complexities without getting lost in them. You learn to build robust measurement frameworks while acknowledging the inherent uncertainty in human behavior.
The Product Strategy Evolution: Systems Thinking at Scale
Leading product development taught me that individual decision-making insights must scale to organizational and market levels. It’s not enough to understand how one person chooses—you need to understand how choices aggregate, influence each other, and create emergent behaviors.
This systems thinking approach reveals why many products fail despite solving real problems. They optimize for individual user decisions without considering social dynamics, network effects, or adoption cascades.
At North AI, we’re designing tools that work at multiple levels simultaneously:
- Individual Level: Creators get insights about their specific content and audiences
- Platform Level: Aggregated insights about content trends and engagement patterns
- System Level: Network effects where more users generate better insights for everyone
This mirrors how structural engineers think about complex buildings. You optimize individual components (beams, connections, foundations) while ensuring the overall system performs under various load conditions. Product strategy requires similar multi-level optimization across user needs, business objectives, and market dynamics.
The Future: Behavioral Engineering
We’re entering an era where understanding human decision-making becomes as important as understanding physical systems. AI systems increasingly need to predict, influence, and collaborate with human judgment. This requires what I call “behavioral engineering”—applying rigorous engineering principles to design better decision environments.
The opportunities are immense:
- Financial systems could help people make better long-term investments by accounting for present bias and loss aversion
- Healthcare interfaces could improve treatment compliance by reducing cognitive load and leveraging social proof
- Educational platforms could adapt to individual learning patterns and attention spans
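To make the first item concrete: present bias is commonly modeled with the quasi-hyperbolic "beta-delta" discounting function from behavioral economics, where beta < 1 imposes an extra penalty on anything that is not immediate. The parameter values and dollar amounts below are illustrative assumptions.

```python
def discounted_value(reward, t, beta=0.7, delta=0.95):
    """Quasi-hyperbolic (beta-delta) discounted value of a reward t periods away."""
    # t == 0: immediate rewards are taken at face value.
    # t > 0:  an extra one-time penalty beta applies on top of
    #         ordinary exponential discounting delta ** t.
    return reward if t == 0 else beta * delta ** t * reward

today = discounted_value(100, 0)       # $100 now
next_year = discounted_value(110, 1)   # $110 in one year, present-biased view
patient = discounted_value(110, 1, beta=1.0)  # same choice, no present bias
```

A purely exponential discounter (beta = 1) with delta = 0.95 values the delayed $110 above $100 and waits; the present-biased chooser flips that preference. A financial product that understands this asymmetry can structure choices so the long-term option does not have to fight an uphill battle.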
The risks are equally significant. The same principles that enable better decisions can enable manipulation and exploitation. This is why ethical frameworks must be built into these systems from the beginning, just as safety factors are built into structural designs.
At North AI, we’re committed to transparent, empowering applications of attention science. Our goal isn’t to hack attention for engagement maximization, but to help creators understand and respect their audiences’ cognitive resources.
Ethical Framework: Just as structural engineers include safety factors in their designs, behavioral engineers must include ethical safeguards—transparency about influence mechanisms, user agency preservation, and alignment with long-term human wellbeing.
The Personal Transformation
This journey fundamentally changed how I think about intelligence, both artificial and human. Engineering school taught me that intelligence meant finding optimal solutions to well-defined problems. Decision science taught me that real intelligence often means making good-enough decisions under uncertainty with limited time and information.
Humans aren’t broken computers that need debugging. They’re sophisticated systems that have evolved remarkable capabilities for pattern recognition, social coordination, and adaptive learning. Understanding these capabilities—and their systematic limitations—enables better collaboration between human and artificial intelligence.
The transition from analyzing concrete to analyzing cognition required humility. Physical systems follow laws that can be precisely measured and predicted. Human systems follow patterns that are probabilistic, context-dependent, and constantly evolving.
But that complexity is also what makes this work endlessly fascinating. Every day brings new insights about how minds work, how decisions happen, and how technology can amplify human capabilities rather than replacing them.
The Practical Takeaway
For engineers considering similar transitions, the path isn’t about abandoning technical skills—it’s about applying them to more complex, more human problems. The analytical thinking, systems perspective, and statistical rigor from engineering provide a strong foundation for understanding behavioral science.
The key is approaching human behavior with the same respect you’d show any complex system. Don’t assume irrationality when you encounter patterns that don’t match your models. Instead, investigate the underlying mechanisms and environmental factors that produce those patterns.
Start with curiosity about why people make the choices they make. Combine that with rigorous measurement and testing. Build systems that work with human psychology rather than against it.
The transition isn’t always smooth. There are fewer clear “right answers” and more “it depends” situations. But the impact potential is enormous. Understanding human decision-making at scale can help solve problems that pure technical solutions can’t touch.
The Continuing Journey
Five years later, I’m still learning. Every new dataset reveals surprising patterns. Every user study challenges existing assumptions. Every conversation with domain experts opens new research directions.
The field keeps evolving as neuroscience provides better measurement tools, AI enables more sophisticated modeling, and real-world applications generate more data about what actually works in practice.
What started as frustration with the gap between optimal solutions and human behavior has become deep appreciation for the elegance of human cognition. We’re not trying to fix human decision-making—we’re trying to understand it well enough to design better environments for it to flourish.
The journey from finite elements to human elements taught me that the most complex engineering challenges aren’t about materials or structures. They’re about understanding and empowering the remarkable, messy, beautiful system we call human intelligence.
And we’re just getting started.
Lucas Cazelli is Chief Product Officer and co-founder of North AI, where he leads the development of neuroscience-inspired video analytics tools. Previously a structural engineer, he combines systems thinking with behavioral insights to build AI systems that understand human attention and engagement. Connect with him on LinkedIn or follow North AI’s research at north-ai.com.