Linea

A Linea white paper

AI Momentum Guide

8 patterns stalling agentic AI progress in multifamily — and how to move forward.

By Joanna Hackney
Founder, Linea

Foreword

A note from Linea's founder

A few months ago, I stopped scrolling through the AI advice circulating in multifamily and started learning about agentic AI. I spent several months deep in the work — earning certifications through rigorous programs, working with the technology, and studying what was happening across the operators and PropTech leaders I work with most closely. I came in with assumptions I had picked up from the multifamily AI conversation over the last couple of years. Most of them needed to be unlearned.

Multifamily has experimented with AI for a couple of years now, but most of those efforts are point solutions layered on top of existing workflows. They work better when your data is clean, and they're useful tools. But they are fundamentally different from agentic AI. Point solutions handle specific tasks within workflows that stay structurally the same. A chatbot still requires your leasing team to follow up. An email summarizer still requires humans to read and act on the summaries. They make existing work faster; they don't create meaningful financial and operational gains.

Agentic AI is different. It doesn't just respond or summarize. It acts. It operates across workflows with autonomy. It reimagines how work gets done entirely. When you deploy agentic AI, you're redesigning the workflow from the ground up. That distinction changes the build strategy and the value you can capture.

According to McKinsey Global Institute, agentic AI will unlock $430 to $550 billion in value across real estate. That value will not be realized by organizations simply deploying point solutions or layering AI onto existing workflows. It will be realized by organizations willing to reimagine how work gets done.

Through my research, I identified eight specific patterns that are stalling agentic AI progress across organizations of every size and type in this industry. These are not technical barriers. They are not about capability or access to tools. They are about how organizations think about agentic AI and how they approach deploying it. Multifamily is currently chasing only a fraction of the $430 to $550 billion opportunity. The eight patterns below are what stand between where the industry is today and where it could be.

"I came in with assumptions I had picked up from the multifamily AI conversation over the last couple of years. Most of them needed to be unlearned."

Joanna Hackney

The eight patterns

What's quietly stalling AI in your organization

Pattern 01

Waiting for Perfect Data

The assumption

"Get your data house in order before you invite agentic AI in."

Why this pattern is costing you

The advice circulating in multifamily tells organizations to fix their data landscape first. Get your systems talking to each other. Resolve conflicts between databases that tell different stories about the same things. Then deploy agentic AI. The reasoning sounds responsible. But it misses something fundamental about how agentic AI works compared to the point solutions most organizations have already deployed.

Multifamily has experimented with AI for a few years now: chatbots, document summarizers, analytics dashboards. Those are point solutions layered on top of existing workflows. They can only be as good as the information they have access to, so the advice to fix your data first makes sense for those tools. Agentic AI is different. It's not layered on top of a workflow; it reimagines the workflow entirely.

Most multifamily organizations have fragmented data landscapes: leasing platforms that don't talk to accounting, resident apps that don't sync with maintenance, historical data living in three different places with three slightly different versions. If fixing all of that is required before deploying agentic AI, most organizations will be waiting years.

The thinking error

Data quality is a prerequisite for deploying agentic AI. Organizations approach data like any other infrastructure problem — build it right once, then deploy systems on top. Agentic AI inverts that logic. You don't know what data you need until you understand which specific workflows you're reimagining. A workflow for lease renewal requires different data than one for maintenance prioritization. You can't predict what matters until you define the work you're actually doing.

The consequence

Organizations commit to data remediation before deploying agentic AI. They audit systems, document integration gaps, scope the work, and allocate resources. Progress happens. Systems integrate. Standards align. But new data quality issues emerge because data is never "finished" — it's managed continuously as systems evolve. Timelines extend. Resources get redirected. The organization finds itself years into data work before any agentic AI has been deployed. Meanwhile, the window to capture competitive advantage is closing.

The reframe

"Stop asking whether your data is perfect. Start asking which workflow you want to reimagine."

Understand what data agentic AI needs to operate effectively within it. Agentic AI deploys within specific workflows, not enterprise-wide. Those workflows often have more consistent data than organizations expect because the requirements are bounded rather than enterprise-scale.

The organizations capturing value fastest don't wait for complete data remediation. They start with a scoped workflow in which agentic AI can operate effectively, deploy it, learn what constrains their work, and address data gaps as they emerge. The data work that happens is informed by real constraints discovered through actual deployment, not by predictions made before any agentic AI has ever operated in the organization. Often, the process of deploying within a scoped workflow surfaces exactly which data problems need to be addressed and in what order. Starting before everything is perfect isn't reckless. It's the only way to learn what's required.

Pattern 02

Waiting for Perfect Stakeholder Alignment

The assumption

"You need everyone on board before you begin."

Why this pattern is costing you

Most organizations believe they need full stakeholder alignment before deploying agentic AI. Get the regional directors on board, operations aligned, IT comfortable, finance signed off, and legal clear before moving forward. The instinct makes sense — major organizational change requires buy-in. But this logic creates a specific kind of stall that's hard to recover from. Organizations spend months in alignment meetings because agentic AI is still largely theoretical for most participants, and alignment never fully arrives. Someone raises a concern nobody can fully address. Something feels risky. The group decides to loop in one more stakeholder. More meetings follow. Nothing starts. There is one exception that matters: the person at the top must be open to exploring how agentic AI could shape the organization's future. Not necessarily an enthusiast with all the answers, but someone willing to learn alongside the organization. Without that, nothing else moves.

The thinking error

Alignment is a prerequisite to deployment. Each stakeholder brings legitimate concerns based on their function — operations worry about workflow disruption, IT about security, finance about ROI, legal about liability. But consensus on something theoretical is impossible. You end up waiting for alignment that will never arrive.

The consequence

Organizations form alignment committees. Workshops proceed, risks are documented, approval matrices are proposed. Everything becomes a theoretical discussion about what might happen, what could go wrong, what should be worried about. The committee eventually recommends moving forward with conditions — which means more planning, more requirements, more definition before anything real gets built. Months pass. The organization declares itself aligned. Then the agentic AI gets built and operates nothing like what the committee predicted. The alignment that took months to achieve becomes irrelevant within weeks of deployment.

The reframe

"Real proof moves people faster than consensus ever will."

Start with the person at the top who is open to exploring agentic AI, then move to the people closest to the specific workflow you're reimagining. Show them what's being explored and let them see what changes. When the leasing team sees an agent handling 80% of renewal follow-ups, the regional director believes it. When the maintenance coordinator sees work orders being routed more intelligently than the manual system, buy-in follows naturally.

The organizations that move fastest start with proof and build alignment from there. Those conversations about whether to expand happen in the context of something real rather than something theoretical. Demonstration builds understanding in a way that alignment meetings never do.

Pattern 03

Waiting for a Complete Governance Policy

The assumption

"You need a full AI governance policy in place before you can start working with agentic AI."

Why this pattern is costing you

Governance matters. How AI operates inside an organization, what it is authorized to do and access, how decisions get made, and where human oversight is required are all questions worth taking seriously. The timing and sequencing of how governance gets built is where most organizations go wrong. Building a governance policy before anyone in the organization has worked with the technology is like writing the rules for a game nobody has played yet. The result is something theoretical, built by committee, approved by stakeholders trying to make decisions about something they have never experienced. That process can take a very long time. And when the organization finally deploys, reality looks nothing like what the policy anticipated. So the policy gets rewritten. All that time gets lost twice.

The thinking error

Governance can be predicted before experience. Organizations identify all possible scenarios, document what should happen in each, and build guardrails that prevent bad outcomes before they occur. That works when risks are known and scenarios are bounded. But the organization does not yet know what decisions the agent will face or what trade-offs it will encounter. There is a deeper issue: AI systems make decisions. Without explicit direction from the organization about what to value, they fill that vacuum with whatever context is available. When that happens, the AI is not making a judgment call; it is filling a vacuum the organization left open.

The consequence

A governance committee forms. Workshops happen. AI risks get documented in the abstract. A comprehensive policy gets drafted that covers scenarios nobody will encounter and misses scenarios nobody anticipated. It becomes increasingly conservative as each stakeholder adds concerns. It is approved. The organization declares itself governed. Then the agent gets built. Within weeks, it is making decisions the policy never addressed — different workflows, different handoff points, different failure modes, different value trade-offs. The agent is doing exactly what was asked of it, but what was asked of it does not align with what the governance policy anticipated. The policy gets revised. The organization has spent months on governance that did not apply to what it built.

The reframe

"Governance is not a prerequisite for starting. It is a product of starting."

Organizations that wait for a complete governance policy before engaging with agentic AI are making a significant time investment in something that will almost certainly need to be rewritten once they have experience to draw from. The question worth asking is whether you are more comfortable with a policy that has never been tested against reality, or with building governance around what you learn.

That question forces a different kind of conversation inside the organization. And that conversation is where real progress begins. Because what you encode into your governance — your values, your priorities, what makes you different — is ultimately what your agent will reflect into how your business operates.

Pattern 04

Assigning AI to the Technology Team

The assumption

"AI is a technology initiative, so the technology team should own it."

Why this pattern is costing you

When new technology lands on the roadmap, most organizations do not stop to ask who should own it. The answer feels obvious — it goes to the team that always handles technology deployments. With agentic AI, that default assumption is one of the most consequential mistakes an organization can make. Third-party AI solutions — chatbots, document summarizers, analytics tools — can be vetted by cross-functional committees, piloted, and deployed through technology teams. That works because those solutions are self-contained. They integrate into existing workflows. They do not require ongoing reimagining of how work gets done. Agentic AI requires something entirely different. It is a reimagining of how work gets done inside the organization, and that requires a fundamentally different set of capabilities than the ones technology teams are structured to provide.

The thinking error

Technology ownership equals the right ownership for transformation. Technology teams are structured for linear execution: receive requirements, configure systems, deliver a functioning product. They optimize for technical feasibility — can we build this, will it function, does it meet the spec. Agentic AI requires different primary questions: which workflow should we reimagine, what is the business trying to accomplish, how should this agent think and prioritize, where do human values matter more than algorithmic efficiency. Those questions come before technical questions, not after.

The consequence

Organizations assign agentic AI to their technology teams. The team gathers requirements, designs the system architecture, builds the agent, and successfully deploys it. The agent performs as specified and integrates cleanly. But the workflow has not fundamentally changed. The agent automates parts of the existing process rather than reimagining what the process could be. Operations feel like the same work, slightly faster. Leadership wonders why the investment did not transform the business. There is also an adoption consequence: people who were not part of the agent's design do not understand its thinking. Trust is slower to develop. Adoption is harder to sustain.

The reframe

"Agentic AI requires strategic business and organizational leadership."

The people leading the initiative need to understand the business deeply. They need the authority to decide which workflows are worth reimagining. They need credibility to navigate organizational change. Technology teams are essential partners in building and deploying what gets designed. But ownership of an agentic AI initiative requires a different profile.

The organizations that capture real value from agentic AI operate on a different timeline and cost structure than traditional technology deployment. They build something scoped. They see what it does. They adjust based on what they learn. When iteration happens in hours instead of months, the cost of being wrong collapses. That fundamentally changes what kind of leadership matters. In the traditional model, you debate and align for months before building because the cost of getting it wrong is enormous. In the agentic AI model, you build and learn quickly because the cost of being wrong is low. The mismatch between what technology teams are built for and what agentic AI requires is what creates the failure.

Pattern 05

Thinking Too Small About AI's Role

The assumption

"The goal of AI is to make existing workflows faster and give people their time back."

Why this pattern is costing you

Most organizations approach agentic AI by finding a workflow that consumes too much time, layering AI on top of it, and recovering some of that time for higher-value work. The thinking is understandable, but it is exactly why most AI initiatives in multifamily produce incremental gains rather than transformation. According to McKinsey Global Institute, agentic AI will unlock $430 to $550 billion in value across real estate. Multifamily is currently chasing a fraction of that by focusing on making existing workflows slightly faster rather than asking what those workflows could become if they were completely redesigned.

The thinking error

Optimizing existing workflows results in meaningful transformation. Most organizational improvement is incremental — make a process 10% faster, reduce errors by 15%. That works when you're improving something already headed in the right direction. But most multifamily workflows were designed years ago around constraints that no longer exist: leasing built around sequential coordination, maintenance around work orders, renewal around manager relationships. When you ask where AI fits into those workflows, you're asking the question within the constraints of a system designed without AI. The answer will always be incremental because the question assumes the framework is correct.

The consequence

Organizations identify a workflow that feels inefficient and layer agentic AI on top. The workflow moves faster. Operations improve. Leadership evaluates the return and decides whether to expand. The agent works. The workflow stays in the same shape it has always been. Humans still do their jobs, just slightly faster. The organization captures value, but not the value that agentic AI can generate. They have optimized a workflow that may have been broken from the start. The competitive advantage available through genuine transformation remains unexploited.

The reframe

"Reimagine the entire workflow to achieve your ideal business outcomes."

Ask what the workflow would look like if you redesigned it entirely with agentic AI at the center. That question produces a categorically different set of answers. It opens possibilities for who does what, when they do it, why they do it, where decisions are made, and which capabilities matter. The workflow that emerges is fundamentally different from the original, not a faster version.

The gap between incremental improvement and transformation lies in the willingness to question the framework rather than optimize within it. The organizations that capture the most value from agentic AI are the ones willing to ask harder questions about their workflows before they start building. They understand the existing workflow as a starting point for thinking, not a constraint on what's possible.

Pattern 06

Keeping AI at the Leadership Level

The assumption

"Leadership needs to fully understand AI and have a complete plan before bringing it to the broader organization."

Why this pattern is costing you

Leaders want to present AI to their teams with confidence. They want to have answers before opening questions. They do not want to appear uninformed in front of the people they lead. So they put their heads down. They meet with vendors and consultants. They attend conferences. They build a plan. And then one day, they walk into a room and announce that AI is coming. The team hears one thing: are we being replaced? That fear is both predictable and avoidable. But it only gets avoided if transparency happens from the beginning, before the plan is built, before the vendors are selected, before anything is decided.

The thinking error

Certainty builds trust. Uncertainty erodes it. Agentic AI is new to the entire world — nobody has all the answers. An organization presenting AI as a solved problem before it has been deployed is building on false certainty. When reality does not match the plan (and it rarely does in early deployments), the credibility gap is far more damaging than the vulnerability of admitting uncertainty upfront. When leaders present a complete strategy, the team sees a decision that has already been made. They are no longer participants — they are recipients. That shift changes whether they feel ownership or resistance.

The consequence

Leadership researches in isolation, builds a strategy, and announces that AI is coming. The announcement presents AI as inevitable, as already decided. The team's first thought is what it means for them: will my job change, am I being replaced, will I lose control of decisions I currently make. Those questions surface immediately — but the team was never invited to the table before the questions arose. Trust is already eroded. What could have been a collective learning process becomes a top-down implementation. The moment the announcement happens, adoption becomes harder, and change management becomes necessary because the foundation of trust was never built.

The reframe

"Transparency about AI does not require having all the answers."

It requires a willingness to share what is being explored, why it matters, and what the organization is working toward before the decisions are made. Leaders who bring their teams into that process early build trust, making adoption possible. They create space for questions that do not yet have answers. They treat the development of an AI strategy as a collective exercise rather than an executive deliverable.

This approach does not make leadership look weak. It makes the organization more capable of adapting as understanding deepens. When people are part of the learning process from the beginning, they develop judgment about AI alongside the leaders. They understand not just what is being built, but why. And that understanding is what drives real adoption at the ground level, where agentic AI either delivers on its promise or quietly fails to gain traction.

Pattern 07

Leaving Out the People Who Know the Work Best

The assumption

"AI should be designed by leadership."

Why this pattern is costing you

The people closest to the work know things no process map can capture. They know where the system breaks down and what workarounds have been in place for years. They know what the workflow looks like in real life versus on paper. They know the edge cases that happen once a quarter but cost the organization real money. Leaving them out of the design process results in an agent built on assumptions rather than reality. And the gap between those two things tends to surface at the worst possible moment — after the investment has been made and the expectations have been set.

The thinking error

Design is strategic work; frontline people do execution work. Organizations hold back from involving frontline people for reasons that feel reasonable: they don't want to add more to a stretched team, they're not ready to tell people that an agent will take over part of what they do, they want a more complete picture before opening the conversation. But that mental model breaks down with agentic AI. An agent that is supposed to reimagine a workflow cannot be designed without understanding how that workflow operates — and nobody understands that better than the people who do it every day. Asking frontline people to design the agent is not asking them to build the technology. It is asking them to define how the agent should think, what it should prioritize, and where it needs to hand off to a human.

The consequence

When frontline people are excluded, the agent gets built around what leadership and designers assume the workflow is. Those assumptions usually miss what matters. An agent designed for lease renewal follow-ups might be optimized for speed without understanding that relationship-building matters more than fast responses in certain situations. An agent routing maintenance might prioritize efficiency without knowing that resident communication preferences vary by property and require local knowledge. The agent deploys and breaks in ways nobody anticipated because it was designed around assumptions rather than reality. The frontline team knows exactly what went wrong. But they were not part of designing it, so their knowledge was never encoded. There is also an adoption dimension — these are the same people who will decide if the agent is worth trusting.

The reframe

"The people closest to the work are not obstacles to design. They are essential to it."

What they know about how workflows operate cannot be replaced by process maps or strategy conversations. When organizations treat frontline knowledge as input to the design of the agent, they build something that works in practice. When they treat it as something to be informed about after decisions are made, they end up building something that works in theory.

The frontline people working alongside the agent are not just users; they are builders. They are designers of how it operates. Their input into what the agent should prioritize, where it should defer to human judgment, and how it should communicate is design work. And it is work that makes the difference between an agent that transforms the workflow and an agent that automates the existing process.

Pattern 08

Failing to Reenvision the Human Role

The assumption

"Deploy the agent, and the human side will figure itself out."

Why this pattern is costing you

Most agentic AI deployments have a detailed plan for the agent. Almost none have an equally detailed plan for the human working alongside it. And that gap is where significant value is lost. When organizations design an agentic workflow, the focus is almost always on what the agent will do, what it handles, what it takes off the team's plate. That part gets mapped out carefully. The human role gets assumed — the team will figure it out, they will naturally shift to higher-value work. That assumption is rarely tested before deployment, and in practice, it rarely holds.

The thinking error

Removal creates space; space creates migration to higher-value work. The logic seems sound: remove the drudgery, humans naturally move up the value chain. But migration does not happen automatically. Removing work from someone's workflow doesn't create the conditions for them to migrate to something more valuable — it creates a vacuum. In that vacuum, people experience confusion about where they fit, uncertainty about what they should be doing, and hesitation about whether the agent is trustworthy. Redesigning the human role requires design work. What does the human do when the agent reaches its limit? What oversight structures exist? What does success look like for someone whose job has fundamentally changed? These are design questions requiring the same rigor you give to designing the agent itself.

The consequence

When human roles are not redesigned, the workflow breaks in specific ways. Questions the agent cannot answer sit waiting for a human response, with no defined process for who handles them or how quickly. Handoffs happen without structure. Requests move from agent to human with no clear ownership. Gaps open up between what the agent does and what the human picks up. The gains expected from reimagining the workflow are absorbed by everything that was never reimagined.

The reframe

"Reenvisioning the human role must be designed with the same level of intention as the agent itself."

The humans working alongside your agent function as designers of how it operates, not merely as users. Their understanding of what the human should do when the agent cannot, which decisions stay with the human, what feedback loops tell the agent it is making mistakes, and how humans stay in control of a fundamentally reimagined process — these are all design questions requiring input from the people who actually do the work.

This is not an optional aspect of agentic AI deployment. The organizations that get this right are the ones that ask hard questions about human roles before deployment, not after. They understand that an agent without clarity about its human role will underperform, no matter how well it was built.

Closing

Where do we go from here?

The eight patterns in this guide share a common thread. The advice shaping how organizations approach agentic AI was largely developed in a different context, for a different category of technology, and it has not kept pace with what agentic AI is and what it actually requires.

Agentic AI is a reimagining of how work gets done. That distinction changes everything. It changes what must be true before you begin. It changes who needs to be involved and when. It changes how governance is built, how humans are positioned, and how value is measured. And it changes what success looks like.

The organizations that will capture the most value from agentic AI in the coming years are those willing to challenge the assumptions they brought to this conversation. The ones willing to start before everything is perfect. The ones willing to bring their full organizations into the process rather than managing agentic AI as an executive initiative handed down from above. And the ones willing to reimagine their workflows entirely, rather than layering new technology onto old structures.

Multifamily has an enormous opportunity ahead.

What's next

Ready to break a pattern?

The patterns in this guide are not theoretical observations. They are what Linea works through with multifamily owners, operators, and PropTech leaders every day. If this guide shifted how you think about agentic AI in your organization, that shift is worth exploring further. The gap between understanding the patterns and knowing how to move past them is where real strategic work begins.

Linea's AI Advisory Services are designed for organizations ready to move past planning and into the actual work of bringing agentic AI into their business — understanding where your organization is, which workflows are worth reimagining first, evaluating the right tools and systems for your specific context, and supporting the pilot, deployment, and adoption of agentic AI to drive real transformation.