Automated valuation models (AVMs) are sold as solutions that eliminate human subjectivity. The algorithm does not see race, does not factor in appearance, and does not make decisions under the influence of mood. It looks like the ideal tool for a fair market. The data, however, tells a different story: in a number of documented cases, algorithms do not simply reproduce bias; they make it harder to detect and eliminate.
How Bias Emerges in a “Neutral” Algorithm
A machine learning algorithm learns from historical data. If that data reflects decades of discrimination in the real estate market (redlining, unequal access to credit, the systematic undervaluation of housing in certain neighborhoods), the algorithm internalises these patterns as the norm.
The mechanism of bias can be indirect. The algorithm does not use race as a direct input parameter. Instead, it relies on proxy variables: neighborhood median income, crime rate, school ratings, the density of ageing infrastructure. These variables statistically correlate with the demographic composition of the population, and through them bias enters the model while the outward appearance of objectivity is preserved.
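To make the proxy mechanism concrete, the sketch below shows one simple check a team could run before features enter a model: measuring how strongly each candidate input correlates with the demographic composition of a neighborhood. This is a minimal illustration; the DataFrame and column names (`black_or_latino_share`, `median_income`, `crime_rate`) are hypothetical, not data from any of the studies cited here.

```python
# Minimal sketch: flagging candidate proxy variables before they enter an AVM.
# Assumes a hypothetical per-neighborhood DataFrame; column names are illustrative.
import pandas as pd

def flag_proxy_candidates(df: pd.DataFrame,
                          protected_share_col: str,
                          candidate_features: list[str],
                          threshold: float = 0.4) -> pd.Series:
    """Return candidate features whose absolute correlation with the
    protected-group share exceeds `threshold`; these deserve scrutiny as proxies."""
    corrs = df[candidate_features].corrwith(df[protected_share_col]).abs()
    return corrs[corrs > threshold].sort_values(ascending=False)

# Illustrative usage:
# suspects = flag_proxy_candidates(
#     neighborhoods,
#     protected_share_col="black_or_latino_share",
#     candidate_features=["median_income", "crime_rate", "school_rating", "building_age"],
# )
# print(suspects)
```

A high correlation does not by itself prove a feature is acting as a proxy, but it tells the team where to look first when auditing model outputs.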
A study by the University of California, Berkeley documented that an AI mortgage lending system systematically charged Black and Latino borrowers higher rates compared to white borrowers with identical credit profiles. This occurred without any explicit discriminatory parameter in the model.
Documented Cases: What the Research Shows
Research conducted by Freddie Mac found that AVM systems are more likely to undervalue homes in neighborhoods with predominantly Black or Latino populations. The cause is not explicit discriminatory logic in the algorithm but the fact that it was trained on market data that already reflects those valuation gaps.
A separate study by HUD (the US Department of Housing and Urban Development) identified a persistent pattern: AVMs show significantly larger valuation errors in majority-Black neighborhoods compared to majority-white neighborhoods. Even after applying more sophisticated ML techniques and expanding datasets, the difference in error rate remained statistically significant.
Researchers publishing in Nature Humanities and Social Sciences documented another type of asymmetry: ML real estate valuation models systematically overvalue properties in neighborhoods with higher education levels and undervalue those in neighborhoods with lower education levels, regardless of the physical condition of the housing itself. An algorithm that fits well to existing market prices unintentionally locks in and amplifies social inequality.
Why This Is Harder to Fix Than Human Bias
The bias of a human appraiser is visible and can be challenged. The bias of an algorithm is hidden behind technical complexity. Even when the outcome is discriminatory, tracing which specific parameter or combination of parameters led to a lower valuation is technically difficult. This is particularly true in large neural networks where the decision-making logic is not directly interpretable.
US regulators, in CFPB statements, have explicitly noted that algorithms can mask biased inputs behind a facade of apparent objectivity, making detection and challenge more difficult.
| Parameter | Human Bias | Algorithmic Bias |
| --- | --- | --- |
| Visibility | Manifests in appraiser behavior that can be observed | Hidden inside a mathematical model, inaccessible to direct observation |
| Scale | Limited to one specialist or team | Scales automatically across thousands of valuations simultaneously |
| Explainability | The appraiser can explain their decision | Black-box decisions are often not interpretable |
| Challenge | The client can seek a second appraiser | Proving systemic bias requires a technical audit |
| Regulatory protection | Fair housing laws cover human decisions | Algorithms were long considered "neutral"; the CFPB 2024 rule changed this |
| Self-reproduction | Depends on the specific individual | Embedded in training data and reproduced with each new version of the model |
| Detection | Through complaints, inspections, comparison of valuations | Requires a dedicated statistical audit or access to source code |
The table highlights the core problem: algorithmic bias does not simply reproduce human bias, it makes it systemic, scalable and shielded by pseudo-objectivity. This is precisely why regulators in the US and EU have begun introducing specific requirements for AVM systems that have no equivalent for human appraisers.
Want to understand how to build PropTech solutions that meet fairness requirements for algorithms and AI ethics? The ORIL Innovation team advises technology teams and developers on responsible AI implementation in real estate. Book a Consultation →
What Regulators Are Doing: New Standards in the US
In June 2024, six US federal regulators — CFPB, OCC, the Federal Reserve, FDIC, NCUA and FHFA — jointly approved a new rule: Quality Control Standards for Automated Valuation Models.
The rule requires companies that use AVM systems for mortgage lending to comply with five quality control standards: ensuring valuation accuracy, protecting against data manipulation, avoiding conflicts of interest, conducting mandatory random testing of the model, and complying with anti-discrimination laws.
In parallel, HUD in July 2024 filed charges of racial discrimination against a lender, a real estate appraisal company and an appraiser for violations of the Fair Housing Act in the appraisal process. This sent a signal to the market: regulatory pressure in this area is growing and technological neutrality is no longer a sufficient defense.
EU AI Act and AVM Systems: What This Means in Practice
The EU AI Act, which entered into force in 2024 with phased requirements through 2026 and 2027, introduces a classification of AI systems by risk level. Automated real estate valuation systems that affect access to credit and mortgage products fall under the category of high-risk AI systems alongside employment screening, medical diagnostics and creditworthiness assessment.
What this means for PropTech teams developing or planning to bring AVM products to the European market:
- Mandatory documentation. Providers of high-risk AI systems are required to maintain technical documentation describing the model architecture, training data and bias risk assessment methodology.
- Transparency and explainability. The system must be capable of explaining the valuation result in understandable terms, not simply producing a number. This applies directly to black-box deep learning models.
- Human oversight. For high-risk systems, the possibility of human intervention and result review is mandatory. Fully autonomous AVMs without a challenge mechanism will not meet the requirements.
- Bias audit. The system must be tested for discriminatory outcomes across protected characteristics before market entry and on a regular basis thereafter.
- Registration. High-risk AI systems are subject to registration in a dedicated EU database.
For teams currently building AVM products aimed at the European market, these requirements should be built into the architecture at the design stage, as retrofit compliance is significantly more expensive.
Technical Approaches to Reducing Bias
Researchers and PropTech teams are pursuing several technical approaches.
- Model auditing. Regular review of valuation outputs for systematic deviations by demographic characteristics or geographic zones, plus comparison of AVM results against independent appraisals (a minimal audit sketch follows this list).
- SHAP values and explainable models. Technical tools that make it possible to understand which specific parameters most influenced a given valuation. They increase transparency and enable the detection of suspicious patterns.
- Alternative data sources. Incorporating non-traditional data rather than relying exclusively on historical transactional data that reflects past discriminatory practices. For example, the physical condition of infrastructure assessed from satellite imagery, or current market activity.
- Hybrid approaches. Combining algorithmic valuation with human review in borderline cases or when results deviate significantly from the expected range.
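As a minimal illustration of the auditing approach above, the sketch below groups valuation errors by neighborhood demographic majority and applies a simple two-sided test. The column names (`avm_value`, `sale_price`, `majority_group`) are hypothetical placeholders; a real audit would use richer group definitions and multiple error metrics.

```python
# Minimal audit sketch: compare AVM valuation error across neighborhood groups.
# Column names are illustrative; the group label would come from census or other data.
import pandas as pd
from scipy import stats

def error_disparity_report(df: pd.DataFrame) -> None:
    # Percentage error of the AVM estimate relative to the realized sale price.
    df = df.assign(pct_error=(df["avm_value"] - df["sale_price"]) / df["sale_price"])

    by_group = df.groupby("majority_group")["pct_error"]
    print(by_group.agg(["mean", "median", "std", "count"]))

    # Two-sided test: do the two groups draw their errors from the same distribution?
    groups = [values for _, values in by_group]
    if len(groups) == 2:
        stat, p_value = stats.mannwhitneyu(groups[0], groups[1], alternative="two-sided")
        print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.4f}")

# error_disparity_report(transactions)  # `transactions` pairs AVM estimates with sale prices
```

A statistically significant difference in error between groups does not end the analysis, but it is the trigger for the deeper review described in the checklist below.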
It is important to understand that the problem of bias in AVMs is addressable through technical methods but requires a deliberate decision by the team that designs and implements the system.
Checklist: How to Test an AVM Model for Bias
For PropTech teams developing or auditing automated real estate valuation systems:
- [ ] Training data audit. Verify whether the training data includes transactions from neighborhoods that have historically experienced discriminatory practices. Is the sample balanced across geographic and demographic dimensions?
- [ ] Testing across protected groups. Compare valuation results for similar properties in neighborhoods with different demographic compositions. Is there a statistically significant difference in model error?
- [ ] Proxy variable review. Which variables does the model use as inputs? Are any of them metrics that statistically correlate with race, ethnicity or social status (for example, neighborhood median income or crime rate)?
- [ ] SHAP analysis or equivalent. Can you explain which specific parameters most influenced a given valuation? If not, the model requires further work on explainability (a sketch follows this checklist).
- [ ] Regular retesting. Bias can emerge after the model is retrained on new data. Auditing is not a one-time measure but an ongoing process.
- [ ] Challenge mechanism. Is there a provision for human review of a result? Can the client challenge a valuation and receive an explanation?
- [ ] Regulatory compliance. For the US market: compliance with the CFPB 2024 AVM rule. For the EU market: review of EU AI Act requirements regarding system classification and mandatory documentation.
- [ ] Methodology documentation. Are the model training methodology, data selection and audit results documented internally and available for regulatory review?
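For the SHAP item above, the following sketch shows how per-valuation feature contributions could be extracted with the open-source `shap` library, assuming a tree-based regressor. The model, feature matrix and column names are placeholders, not a reference implementation of any specific AVM.

```python
# Minimal explainability sketch: per-feature SHAP contributions for one valuation.
# Assumes a tree-based regressor (e.g., gradient boosting) and a feature matrix X
# of property and neighborhood attributes; all names here are hypothetical.
import pandas as pd
import shap

def explain_valuation(model, X: pd.DataFrame, row_index: int) -> pd.Series:
    """Return per-feature SHAP contributions for a single valuation, largest first."""
    explainer = shap.TreeExplainer(model)      # works for tree-based models
    explanation = explainer(X)                 # shap.Explanation over all rows
    contributions = pd.Series(explanation[row_index].values, index=X.columns)
    return contributions.sort_values(key=abs, ascending=False)

# Hypothetical usage, assuming `model` was trained on feature matrix `X`:
# print(explain_valuation(model, X, row_index=0).head(10))
```

Output of this kind is also what makes the challenge mechanism workable in practice: a reviewer can see which inputs drove a disputed valuation instead of arguing with a single number.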
Why This Matters for PropTech Teams Outside the US
Algorithmic bias is not a phenomenon limited to the American market. Any team developing or deploying an AVM system trained on historical market data faces the same risk: if the market from which the data was collected had structural inequalities, the model will internalise them.
For the European PropTech market, this question takes on particular significance in the context of the EU AI Act, which introduces the category of high-risk AI systems; automated real estate valuation systems that affect access to credit fall squarely within that category.
Responsible design of AVM systems is not only a matter of regulatory compliance. It is a matter of trust in PropTech as an industry and in AI as a tool that should improve rather than reproduce the injustices of the market.
Want to follow developments in AI ethics in PropTech and responsible technology implementation in real estate? Listen to the Innovation Blueprint podcast — conversations with industry practitioners on the technologies shaping the future of the Built Environment. Listen to Innovation Blueprint →
Algorithmic bias in real estate valuation sits at the intersection of technology, law and social justice. It requires attention not only from regulators but from teams building PropTech products: at the design stage, in the selection of training data and in model architecture. The decisions about which data to train an algorithm on and how to verify its fairness are made by specific people and it is they who determine whether AI becomes a tool for a fairer market.
Sources:
- CFPB. Quality Control Standards for Automated Valuation Models. June 2024
- Debevoise & Plimpton. Federal Regulators Approve New Rule on AI Use and Bias Risks in Real Estate Valuation. July 2024
- HUD. Fair Housing Act Charge: Racial Discrimination in Appraisal Process. July 2024
- Freddie Mac. Racial and Ethnic Valuation Gaps in Home Purchase Appraisals
- HUD User / Cityscape. Racial Disparities in Automated Valuation Models. Vol. 26, No. 1
- Nature Humanities and Social Sciences. Asymmetric Impacts of AI on Housing Price Valuation Across Education Levels. December 2025
- University of California, Berkeley. Research on AI mortgage systems and racial disparities
- UChicago Kreisman Initiative. AI is Making Housing Discrimination Easier Than Ever Before. February 2024
