Originally Posted On: https://www.vistra.com/insights/us-real-estate-managers-lead-ai-not-data-foundations
AI is transforming how US real estate managers think about investment insight and operational efficiency. Yet without a common language for their data, even the most advanced models risk automating inconsistency rather than eliminating it. In this piece, the third in our global series on regional trends in the real estate space, Marlyn Ramirez builds on earlier perspectives from APAC and EMEA to explore why standardization, not just innovation, must be the foundation of any real progress in AI-enabled real estate management.
Earlier in this series:
- Part 1: APAC real assets: turning market friction into opportunity for global investors
- Part 2: When the data doesn’t add up: How fragmented systems are holding back European real estate
Now, we turn to the US market.
Many US real estate managers are already deploying AI across their businesses, but too often they still rely on fragile, manual data foundations that limit both the speed and reliability of the insights they deliver to investors.
In Data at the Crossroads, a recent survey conducted by Vistra in collaboration with Funds Global Intelligence, 40% of North American respondents describe their firm’s data as excellent and another 50% say it is good, meaning nine out of ten believe they already have strong data. At the same time, North American managers say they most urgently need high-quality data for financial and performance reporting and for risk management, while also pointing to a lack of standardization, legacy systems and data security concerns as major barriers to using that data effectively. That misalignment, confidence in day-to-day data alongside a recognition that the underlying plumbing and governance are not where they need to be, is the focus of this insight.
Leading on AI, lagging on data
North American real estate managers (primarily US, plus Canada) report some of the highest levels of AI-related ambition in the survey. Across the full sample, 49% of respondents say they expect to develop or significantly improve AI and machine learning in their roles over the next 12 months, and North American firms are more likely than their European peers to put AI at the top of their technology agenda, alongside strong current use of big data and advanced analytics.
What the same data also show, though, is that many of the foundations for this transformation are still missing. North American respondents highlight familiar obstacles as material constraints on their data strategies, including data security and privacy concerns, lack of standardization, poor data quality and outdated or unintegrated technology infrastructure. In my work with US managers, that looks like an industry racing ahead on use cases and tooling, while the basic structure of how data is defined, exchanged and governed has not yet caught up.
From the outside, investors see a cutting-edge tech stack; from the inside, you can still see spreadsheets and one-off mappings holding everything together. There is real automation in discrete tasks such as invoice capture and data entry, but the flow from property to fund remains heavily dependent on manual intervention and human judgment.
Why the data still “feels” good enough
The report findings help explain why so many US leadership teams feel comfortable with their data. In North America, 40% of respondents rate their firm’s overall data as excellent and 50% as good; in specific domains such as asset valuation, portfolio optimization, risk management, investor reporting and ESG, around four in ten or more describe the data they use as high quality and most of the rest call it moderate. On the face of it, this looks like a very strong position.
From the perspective of a senior leader reading a polished quarterly deck, that confidence is understandable. By the time the numbers reach the C-suite, teams of accountants and analysts have already spent weeks cleansing, reconciling and re-presenting them for a specific stakeholder. I can appreciate the view that “our data is great because a group of accountants have already spent the time cleansing it, making sure it works for a specific stakeholder’s benefit.” Executives see well-designed dashboards and understandably conclude that the underlying data model is equally robust, without always seeing the effort and fragility underneath.
What those dashboards do not show is how brittle and labor-intensive the process can be. In one US engagement, we took trial balances from more than 50 property managers and had a team of about ten accountants doing nothing but GL mapping in Excel, line by line, across thousands of accounts to fit a central chart. From the outside, the reporting looked sophisticated; from the inside, it was held together by human expertise and a lot of spreadsheets.
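To make the mechanics concrete, here is a minimal Python sketch of that mapping step. The property manager IDs, GL codes and central account names are all hypothetical; in the engagement itself this translation was done line by line in Excel rather than in code.

```python
# A minimal sketch (not production code) of GL mapping to a central chart.
# All property manager IDs, GL codes and central account names are hypothetical.

# Mapping from (property manager, local GL code) to the fund's central chart.
CENTRAL_MAP = {
    ("PM_A", "1000"): "CASH_OPERATING",
    ("PM_A", "6150"): "REPAIRS_MAINTENANCE",
    ("PM_B", "1100"): "CASH_OPERATING",
    ("PM_B", "7200"): "CAPEX_BUILDING",
}

def map_trial_balance(pm_id, rows):
    """Translate one property manager's trial balance into central accounts.

    Returns (mapped, unmapped); the unmapped rows are exactly the lines an
    accountant would otherwise have to resolve by hand.
    """
    mapped, unmapped = [], []
    for row in rows:
        central = CENTRAL_MAP.get((pm_id, row["gl_code"]))
        if central is None:
            unmapped.append(row)
        else:
            mapped.append({**row, "central_account": central})
    return mapped, unmapped

# One illustrative trial-balance line from a hypothetical property manager.
tb = [{"gl_code": "1000", "description": "Cash - Bank Account 1", "balance": 250_000.00}]
mapped, todo = map_trial_balance("PM_A", tb)
print(mapped)  # mapped to CASH_OPERATING
print(todo)    # empty: nothing left for manual review
```

Multiply that mapping dictionary by more than 50 property managers and thousands of accounts, with no shared standard behind it, and the scale of the manual effort becomes clear.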
The hidden weaknesses behind clean reports
The research highlights the structural weaknesses sitting beneath these reassuring perceptions. Across the global sample, significant proportions of respondents cite security and privacy concerns, lack of standardization, poor data quality and legacy systems as barriers to using data more effectively, and North American managers are not exempt from those concerns. In a typical US real estate fund structure, those issues are magnified by the sheer number of participants touching the same underlying facts: property managers, JV partners, asset managers, fund administrators, auditors, tax advisers and valuation specialists, many using different charts of accounts and general ledger mappings.
In practice, that means the same economic reality can look very different as it moves through the chain. I regularly see capital expenditure coded as an operating expense, or even treated like cash, at the property level; from a GAAP perspective those amounts belong on the balance sheet, so a single miscoded multimillion-dollar project can materially distort both performance metrics and asset values if it flows unchallenged into valuation models and investor reports on the first pass. When basic items like CapEx, revenue and operating expenses are not treated consistently, you can end up with investor reports that look clean on the surface but are built on very uneven foundations.
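A simple worked example, using illustrative numbers rather than figures from any client or the survey, shows how far a single coding error can travel:

```python
# Illustrative numbers only: how one miscoded capital project distorts both
# performance metrics and a cap-rate-based valuation. Under GAAP the project
# belongs on the balance sheet, not in operating expenses.
revenue = 10_000_000.0            # annual property revenue
operating_expenses = 4_000_000.0  # genuine opex
capex_project = 2_000_000.0       # capital project wrongly coded as opex
cap_rate = 0.05                   # assumed capitalization rate

noi_correct = revenue - operating_expenses                   # CapEx capitalized
noi_miscoded = revenue - operating_expenses - capex_project  # CapEx expensed

print(f"Correct:  NOI {noi_correct:>12,.0f} -> implied value {noi_correct / cap_rate:>14,.0f}")
print(f"Miscoded: NOI {noi_miscoded:>12,.0f} -> implied value {noi_miscoded / cap_rate:>14,.0f}")
# A $2m coding error becomes a $40m swing in implied value at a 5% cap rate.
```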
The operational consequences are significant. Property managers may have 30 days to close their books, JV partners add further delay, and the fund administrator then spends weeks normalizing and reconciling data from multiple property managers and systems. As a result, it can take around 90 days, and sometimes up to 120 days, to produce a finalized financial statement for investors. During that time, managers know that any “as of today” answers about AUM or risk exposures are based on approximations and stale information.
AI outpacing data readiness
Against this backdrop, AI and analytics programs are being rolled out into an environment where even basic financial concepts are not consistently defined across firms. The Data at the Crossroads survey shows strong appetite among North American managers to expand AI, big data and predictive analytics, suggesting that more models, agents and tools will be layered on top of existing systems rather than the underlying data model being fundamentally reengineered.
There are real efforts under way to train machines and use AI, but until we agree on the definitions of basic data, we cannot fully trust those models to interpret trial balances or general ledger data. In accounting, most repetitive work in the US is still driven by Excel and macros; AI agents can absolutely take that on, but only if what comes out of systems is consistent and machine-readable, not a patchwork of bespoke mappings. Without a common language, AI risks automating inconsistencies instead of eliminating them.
What investors and regulators expect
The report findings show where North American managers feel the greatest pressure to get data right. In this region, respondents say the areas that most require high-quality, accessible data are financial and performance reporting, risk management and investor reporting, with risk data scoring noticeably higher than in Europe or APAC. That reflects the reality I see on the ground: investors are requesting ever more detail as their mindset shifts towards greater transparency.
US investors and regulators are focused on auditable, GAAP-aligned information, and if common concepts like capital expenditures, revenue and operating expenses are not defined the same way across funds and entities, it becomes very hard to give them the confidence they expect. The more detailed and frequent the questions about look-throughs, leverage and ESG become, the more exposed managers are if the building blocks in their general ledgers are inconsistent. When detailed property-level charts containing hundreds or thousands of lines are compressed into a handful of rolled-up lines for investors, managers also lose the ability to unpick the information later if new analytical questions emerge; once many-to-one mappings are applied, the original granularity cannot be recovered without going back through another round of manual work.
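A tiny sketch, with hypothetical line items, makes the irreversibility point concrete: the rollup is trivial to compute, but nothing in the rolled-up figure lets you recover the detail.

```python
# Hypothetical property-level detail collapsing into one investor-facing line.
from collections import defaultdict

detail = [
    ("Repairs - HVAC",       "OPEX", 120_000),
    ("Repairs - Roofing",    "OPEX",  80_000),
    ("Utilities - Electric", "OPEX", 300_000),
]

rolled_up = defaultdict(int)
for label, bucket, amount in detail:
    rolled_up[bucket] += amount

print(dict(rolled_up))  # {'OPEX': 500000} - the three source lines are gone
```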
A universal chart of accounts: a practical solution
One of the most promising solutions now emerging is the development of a universal chart of accounts for real estate, which Vistra is driving as part of a broader data infrastructure initiative. The goal is to move beyond each firm’s bespoke GL and agree on a common language for core financial and operational concepts that can be used across property managers, fund administrators, auditors, tax providers and investors.
Today, one firm might code operating cash as “1000” and label it “Cash – Bank Account 1”, while another uses “1100” with a completely different description; under a universal chart, both would map to a shared reference definition that humans and machines can interpret consistently. Cash is just one example: there are likely to be hundreds or thousands of data points that need common treatment, from rent to fund expenses to ESG-related metrics, and every inconsistency at that level ripples through to the investor’s view of performance and risk. When roughly a third of respondents globally name lack of standardization as one of the biggest barriers to using data more effectively, alongside legacy systems and data security concerns, it reinforces how critical a common language has become for US-led North American managers too.
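In code, the cash example above looks something like the following sketch; the UCOA identifiers and definitions are placeholders, not the published standard, since the standard itself is still being developed.

```python
# Illustrative only: two firms' different local cash codes resolving to one
# shared reference definition under a universal chart of accounts (UCOA).
UNIVERSAL_CHART = {
    "UCOA-CASH-OPERATING": {
        "definition": "Unrestricted cash held in operating bank accounts",
        "statement": "balance_sheet",
    },
}

LOCAL_TO_UNIVERSAL = {
    ("firm_a", "1000"): "UCOA-CASH-OPERATING",  # labelled "Cash - Bank Account 1"
    ("firm_b", "1100"): "UCOA-CASH-OPERATING",  # different code and label, same concept
}

def to_universal(firm, local_code):
    return LOCAL_TO_UNIVERSAL[(firm, local_code)]

# Both firms' cash accounts now resolve to the same shared definition.
assert to_universal("firm_a", "1000") == to_universal("firm_b", "1100")
```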
Building a shared data dictionary and semantic layer
The universal chart of accounts is only one piece of the puzzle. To move forward constructively, we also need a shared data dictionary and a semantic ontology layer that allows systems to reason about similarities and differences across firms. Once we agree on formal definitions for core concepts, semantic technologies can help AI systems infer that accounts labelled “cash 1” or “operating cash” across multiple ledgers all belong to the same universal cash category, even if the local labels differ.
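As a rough sketch of that inference step, assume a simple alias list stands in for a full ontology; production systems would use formal vocabularies (such as SKOS or OWL) and fuzzier matching than this.

```python
# A deliberately simple stand-in for a semantic layer: normalize free-text
# account labels and match them against aliases for each universal concept.
# The concept ID and aliases are hypothetical.
import re

ALIASES = {
    "UCOA-CASH-OPERATING": {"cash", "cash 1", "operating cash", "cash bank account 1"},
}

def normalize(label):
    """Lowercase and strip punctuation so 'Cash - Bank Account 1' and
    'cash bank account 1' compare equal."""
    return re.sub(r"[^a-z0-9]+", " ", label.lower()).strip()

def infer_concept(label):
    norm = normalize(label)
    for concept, aliases in ALIASES.items():
        if norm in aliases:
            return concept
    return None  # flag for human review rather than guessing

for label in ["Cash 1", "operating cash", "Cash - Bank Account 1"]:
    print(label, "->", infer_concept(label))
```

The point is not the matching technique but the contract: once every label resolves to an agreed concept, downstream systems, including AI models, can consume the data without a human re-interpreting it each quarter.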
In my view, this combination of standardized vocabulary and machine-assisted reasoning is what will finally allow us to move away from massive Excel files emailed around the industry. Instead, stakeholders should be able to access a portal, specify the parameters they want and download consistent, machine-validated data at will, much as we now retrieve information from the internet without worrying about the plumbing underneath. For US managers, that would mean investors and regulators can pull the views they need directly from a reliable, shared backbone instead of waiting months for bespoke reporting packs.
From months to days: what good looks like
I have seen how powerful this transformation can be in practice. In a previous role working with Yardi’s investment accounting module during its beta phase, we were able to close the books for a large fund structure in a day and produce automated financial statements within three days once mappings and automation were fully in place. By contrast, the current industry norm of 45 to 90-plus days reflects not inherent constraints, but the cumulative delays of manual cleansing, back-and-forth queries and bespoke presentation requests.
Technically, once property managers close their books for the month and data flows seamlessly between systems, you should be able to push a button, submit your data to the asset manager and fund administrator, and have them review and sign off within days, not months. Faster, more reliable data would transform due diligence, risk monitoring and investor communications, and would make it much easier to respond to “as of today” questions about AUM and exposures without falling back on stale estimates.
Strengthening US data foundations: a practical agenda
For US managers, strengthening real estate data foundations starts with deciding what the universal language of data is going to be: a common chart of accounts and shared definitions for leases, CapEx, ESG metrics and other core items. Without that, every new system just adds another translation layer, and every new AI initiative risks automating inconsistency at greater scale.
The next step is modernizing infrastructure in an environment where a substantial minority of firms still point to outdated or unintegrated systems as a barrier, including making better use of platform capabilities that allow more seamless data exchange between system instances. Finally, managers need to put real data governance in place: good data governance is not an IT bolt-on, and in my experience, firms that treat it as a core part of their operating model are the ones that can respond fastest and most credibly to investor and regulatory scrutiny.
Partnering to close the gap
Not every manager has the scale or appetite to build all of this in-house. Over the past few years, I have seen the use of third-party administrators in US real estate rise significantly from a relatively low base, driven as much by data and technology complexity as by cost. When my team works with clients on Yardi, for example, our in-house technical specialists can enhance their platform, standardize mappings and optimize workflows in ways that improve the experience for both our staff and theirs.
For many US managers, partnering with a specialist is more practical than trying to design and maintain a complete data and technology engine alone. But regardless of who runs the systems, the principles are the same: define a common language, build the right infrastructure and governance around it, and then let AI and advanced analytics operate on foundations that are worthy of the investors and data stakeholders who depend on them.
To download the full Data at the Crossroads report, click here.
About the author
Marlyn Ramirez is a senior real estate and fund services specialist at Vistra, with deep experience helping US managers modernize their data, reporting and technology infrastructures. She works at the intersection of accounting, systems and operations, advising clients on how to standardize property-to-fund data flows and make better use of platforms such as Yardi. Drawing on hands-on work with complex US fund structures, Marlyn is also leading an industry-wide effort, alongside key industry collaborators, to establish a universal chart of accounts that will transform how the real estate sector exchanges data, laying the groundwork for truly scalable, AI-ready data foundations.