In 2019, a study by MMC Ventures found that 40% of European startups classified as "AI companies" didn't actually use artificial intelligence in any meaningful way. That was before the generative AI boom, before ChatGPT made every startup add "AI-powered" to their homepage, and before Builder.ai proved you could raise $450 million by calling 700 engineers in India an "AI assistant." The number today is almost certainly higher. Here's how to tell if a company is real or performing.
1. What model do you use, and do you own it? Most "AI companies" are thin wrappers around OpenAI's API. There's nothing wrong with that as a business model, but it means their competitive moat is a prompt and a UI. If OpenAI changes its pricing, deprecates the API, or ships the same feature natively, the wrapper company evaporates overnight. Ask: are they running a proprietary model, a fine-tuned model, or simply reselling someone else's intelligence?
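To make the "thin wrapper" point concrete, here is a minimal sketch of what such a product often reduces to: a prompt template plus one rented API call. Everything here is hypothetical (the prompt, the function names, the example inputs are invented for illustration, not taken from any real product); the request shape follows OpenAI's chat-completions format.

```python
# A hypothetical wrapper product reduced to its essentials. The entire
# "proprietary" layer is a prompt template and payload assembly; the
# intelligence itself is rented per token from the API provider.

SYSTEM_PROMPT = (
    "You are an expert real-estate copywriter. "
    "Write a compelling listing description."
)

def build_request(address: str, features: list[str]) -> dict:
    """Assemble the payload a wrapper startup would send to a hosted model.

    Everything defensible about the 'product' lives in these few lines.
    """
    user_prompt = f"Property: {address}. Features: {', '.join(features)}."
    return {
        "model": "gpt-4o",  # rented, not owned
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_prompt},
        ],
    }

payload = build_request("123 Main St", ["3 bed", "2 bath", "renovated kitchen"])
# The actual network call (e.g. via the openai client) is omitted here; the
# point is that nothing above constitutes a model or a data asset.
```

If OpenAI ships the same prompt as a native feature, there is nothing left to sell.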
2. What happens to the product if you remove the AI? This is the most revealing question. If the answer is "the product still works, just slower" — the AI is a feature, not the product. If the answer is "the product ceases to function" — the AI is real and core. Most companies are the former. They have a perfectly functional SaaS product with an AI label stapled to it because investors are paying 42% higher valuations for anything with AI in the pitch deck.
3. What are your compute costs as a percentage of revenue? Real AI is expensive to run. If a company claims to be doing meaningful AI inference and their compute costs are under 10% of revenue, they're either extraordinarily efficient or they're not doing what they claim. OpenAI spends $1.4 billion annually on compute alone. If a startup tells you their AI costs are trivial, ask what's actually running on the backend.
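The arithmetic behind question 3 is simple enough to sketch. The dollar figures below are invented purely for illustration; the point is the sanity check, not the numbers.

```python
# Back-of-the-envelope check for compute spend as a share of revenue.
# All inputs are hypothetical illustrative figures.

def compute_cost_share(annual_compute_usd: float, annual_revenue_usd: float) -> float:
    """Return compute spend as a percentage of revenue."""
    return 100 * annual_compute_usd / annual_revenue_usd

# A company claiming heavy AI inference at $10M revenue but only $200k compute:
share = compute_cost_share(200_000, 10_000_000)
print(f"{share:.1f}% of revenue")  # 2.0% of revenue
```

A result this far under 10% should trigger the follow-up: what is actually running on the backend?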
4. How is your training data sourced and maintained? This separates the real players from the performers. Companies like Surge AI exist specifically because training data quality is hard and expensive. If a company can't articulate where their training data comes from, how it's labeled, and how the model is retrained, they probably don't have a model worth discussing. The data pipeline is the product — if it doesn't exist, neither does the AI.
5. Can you show me the model's output without the UI layer? This is the Builder.ai test. If the AI is real, the company should be able to demonstrate raw model output — predictions, classifications, generated content — separate from the polished interface. If they can only show you the finished product and can't pull back the curtain, there may not be anything behind it.
The AI-washing epidemic isn't just a branding problem. It's a capital allocation problem. In 2025, $16.7 billion went into proptech VC alone — a 68% year-over-year increase — with four new unicorns, all AI-native. How many of those companies would survive the five questions above? How many of the 51,000 workers laid off this year were fired so their employers could redirect budget toward "AI transformation" that amounts to an OpenAI API key and a chatbot widget?
I run PropTechUSA.ai. The name has AI in it. I use AI — Claude, specifically — as a core tool in building technology, generating content, and operating the business. But I don't call myself an "AI company" because the value I deliver isn't the AI. It's the outcomes: working websites, functional tools, real estate intelligence, transparent technology. The AI is how I build. It's not what I sell. That distinction matters.
When the bubble pops, the companies that survive will be the ones that can answer all five questions honestly. The ones that can't will join Builder.ai in bankruptcy court, and their investors will pretend they never saw it coming. They did. They just didn't ask.