9 Questions Your AI Vendor Can't Answer

Published April 22, 2026

Okay so, I've been doing this AI stuff for a while now, and one thing I've noticed is how many folks out there are selling "AI solutions" without really, truly understanding what they're doing. It's kinda like buying a custom car from someone who mostly just paints stock models. Looks good on the outside, but under the hood? Maybe not so much.

I get a lot of calls from businesses that got talked into some big AI project that just isn't delivering. Or worse, it's costing a fortune while doing less than a well-built spreadsheet. So I put together this list of questions that, honestly, most of those slick-talking AI vendors can't answer. If they can't give you a straight, specific answer to these, you might wanna walk away. Seriously.

1. What's the specific, measurable uplift you guarantee for our bottom line in the first 90 days?

This one really separates the wheat from the chaff. A lot of AI projects get pitched as "efficiency gains" or "enhanced customer experience." That's nice, but what does it actually mean for my bottom line? I'm talking actual numbers. If you're selling me an AI that optimizes my ad spend, I wanna know if you can commit to, say, a 5% reduction in CPA or a 10% increase in ROAS within three months. If they start talking about "long-term strategic value" or "intangible benefits," that's a red flag. Real AI should impact your P&L, not just your buzzword bingo card.
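To make that concrete, here's the back-of-envelope math as a quick Python sketch. Every dollar figure below is a made-up placeholder; plug in your own baseline and make the vendor commit to the targets it prints.

```python
# Back-of-envelope check on a vendor's uplift claim.
# Every figure here is an illustrative placeholder, not a benchmark.

def cpa(ad_spend, conversions):
    """Cost per acquisition: what each new customer costs you."""
    return ad_spend / conversions

def roas(revenue, ad_spend):
    """Return on ad spend: revenue earned per ad dollar spent."""
    return revenue / ad_spend

# A hypothetical baseline month from your own books.
baseline_cpa = cpa(ad_spend=20_000, conversions=400)    # $50.00 per customer
baseline_roas = roas(revenue=60_000, ad_spend=20_000)   # 3.0x

# What the vendor is actually committing to after 90 days.
target_cpa = baseline_cpa * 0.95     # 5% CPA reduction  -> $47.50
target_roas = baseline_roas * 1.10   # 10% ROAS increase -> 3.3x

print(f"CPA must drop from ${baseline_cpa:.2f} to ${target_cpa:.2f}")
print(f"ROAS must rise from {baseline_roas:.1f}x to {target_roas:.1f}x")
```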

2. Can you show me the actual data pipeline, from raw input to your AI's decision output, for a similar client?

This is where things get technical, and that's exactly the point. A vendor should be able to walk you through, step by step, how data flows through their system. I'm not asking for proprietary code, but I want to see a clear diagram or explanation. Where does the data originate? How is it cleaned and transformed? Which specific algorithms are applied at each stage? What's the latency? If they just wave their hands and say "we use machine learning to process your data," that's not good enough. I wanna know if they're using, say, a dbt pipeline for transformations, a specific model like XGBoost for predictions, and then outputting to a Kafka stream. Specifics, please.
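For reference, here's the skeleton of what a straight answer looks like, sketched in Python. The library choices (pandas, xgboost, kafka-python), the column names, and the topic name are my assumptions for illustration, not any particular vendor's stack.

```python
# Skeleton of the pipeline I want diagrammed: raw input -> cleaning ->
# model -> decision output. Library choices, column names, and the topic
# name are illustrative assumptions only.
import json
import pandas as pd
from xgboost import XGBClassifier
from kafka import KafkaProducer

FEATURES = ["spend_30d", "visits_30d", "tenure_days"]

def transform(raw: pd.DataFrame) -> pd.DataFrame:
    """Cleaning step. In a real stack this might be a dbt model, not pandas."""
    df = raw.dropna(subset=["customer_id"]).copy()  # drop unusable rows
    df["spend_30d"] = df["spend_30d"].fillna(0.0)   # impute missing spend
    return df

# Train on historical rows (features plus a binary churn label).
history = transform(pd.read_csv("history.csv"))
model = XGBClassifier(n_estimators=200, max_depth=4)
model.fit(history[FEATURES], history["churned"])

# Score fresh rows and emit decisions to a downstream Kafka topic.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode(),
)
fresh = transform(pd.read_csv("today.csv"))
for cust, score in zip(fresh["customer_id"], model.predict_proba(fresh[FEATURES])[:, 1]):
    producer.send("churn-scores", {"customer_id": int(cust), "score": float(score)})
producer.flush()
```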

3. What specific features or data points did your AI not use, and why?

Explainability is huge, especially when things go wrong. A good AI engineer knows not just what their model uses, but also what it discards and the reasoning behind it. Maybe a feature had too many missing values, or it introduced too much multicollinearity, or maybe it simply didn't add predictive power. For example, if you're trying to predict customer churn, and they tell you their model ignored a customer's last_login_date, I want to know why. Did they find it wasn't significant? Was it too noisy? This question tests their understanding of feature engineering and model robustness, not just their ability to train a generic model.
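Here's the kind of feature audit I'd expect them to be able to produce on the spot, as a minimal sketch. The thresholds (40% missing, 0.95 correlation) are arbitrary examples I picked, not industry standards.

```python
# Minimal feature audit: for every candidate column, record whether it was
# kept or dropped and why. Thresholds (40% missing, 0.95 correlation) are
# arbitrary examples, not standards.
import pandas as pd

def audit_features(df: pd.DataFrame, candidates: list) -> dict:
    corr = df[candidates].corr().abs()
    verdicts = {}
    for col in candidates:
        missing = df[col].isna().mean()
        others = corr[col].drop(col)   # correlations with every other candidate
        if missing > 0.40:
            verdicts[col] = f"dropped: {missing:.0%} missing"
        elif df[col].nunique() <= 1:
            verdicts[col] = "dropped: constant, zero predictive power"
        elif (others > 0.95).any():
            verdicts[col] = f"dropped: collinear with {others.idxmax()}"
        else:
            verdicts[col] = "kept"
    return verdicts

# The answer to "why did the model ignore last_login_date?" should read
# like one of these verdicts (here encoded numerically as days since login):
df = pd.read_csv("churn_training.csv")
cols = ["spend_30d", "visits_30d", "last_login_days_ago"]
for col, verdict in audit_features(df, cols).items():
    print(f"{col}: {verdict}")
```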

4. How do you handle model drift, and what's your retraining schedule or mechanism?

AI models, especially those dealing with real-world data, don't just stay perfectly accurate forever. Things change: customer behavior shifts, market conditions evolve, new product lines launch. This is called model drift. I wanna know if they have a clear plan for detecting when their model's performance starts to degrade. Are they monitoring specific metrics like AUC, precision, or recall? How often do they retrain the model – is it daily, weekly, monthly? Do they use automated pipelines like Kubeflow or MLflow to manage this? If they don't have a solid answer here, you're essentially buying a solution that will slowly but surely get worse over time without you even knowing it.
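A bare-bones version of that monitoring loop looks something like the sketch below. The baseline AUC, the 0.05 tolerance, and the weekly cadence are all assumptions you'd tune to your own setup.

```python
# Bare-bones drift monitor: recompute AUC on the latest labeled window and
# flag the model for retraining when it degrades. The baseline, tolerance,
# and window are assumptions you'd tune, not standards.
from sklearn.metrics import roc_auc_score

BASELINE_AUC = 0.87   # hypothetical holdout AUC measured at deploy time
TOLERANCE = 0.05      # how much degradation we tolerate before retraining

def has_drifted(y_true, y_scores) -> bool:
    current = roc_auc_score(y_true, y_scores)
    print(f"current AUC={current:.3f}, baseline={BASELINE_AUC:.3f}")
    return current < BASELINE_AUC - TOLERANCE

# In production this runs on a schedule (cron, Airflow, Kubeflow, ...) over
# last week's predictions joined with the outcomes that eventually landed.
last_week_labels = [1, 0, 0, 1, 0, 1, 0, 0]              # toy example data
last_week_scores = [0.9, 0.4, 0.6, 0.7, 0.2, 0.5, 0.6, 0.1]
if has_drifted(last_week_labels, last_week_scores):
    print("degraded: kick off the retraining pipeline (MLflow/Kubeflow hook)")
```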

5. What's the exact dataset size, in rows and columns, used to train the proposed model, and how was it sourced?

"We trained it on a lot of data" just doesn't cut it. I need specifics. Was it 10,000 rows? 10 million? 100 million? And how many features (columns) did each row have? More importantly, where did this data come from? Was it scraped from the internet, purchased from a third party like Experian, or was it your own historical data? The quality and quantity of training data are absolutely foundational to an AI model's performance. If they can't tell you, say, "we used 5 million rows of anonymized transactional data, each with 30 features, sourced from 10 similar e-commerce businesses," then they're probably just using some off-the-shelf pre-trained model with limited applicability to your specific context.

6. Can you describe a specific instance where your AI model failed and what you learned from it?

Nobody, and I mean nobody, gets it right 100% of the time. Especially not with AI. A vendor who only talks about their successes is either lying or inexperienced. I want to hear about a time their model gave a bad prediction, made a wrong decision, or completely messed up. How did they detect it? What was the impact? And most importantly, what changes did they make to prevent it from happening again? This shows humility, experience, and a genuine understanding of the iterative nature of AI development. If they can't recall a single failure, they haven't been in the trenches long enough.

7. What's the ongoing maintenance cost after the initial implementation phase, broken down by hour or resource?

Initial implementation costs are just the tip of the iceberg with AI. These systems need ongoing care. I'm talking about things like data pipeline maintenance, model monitoring, retraining, software updates, infrastructure costs (AWS, GCP, Azure charges). Is it a fixed monthly fee, or is it hourly? How many hours per month are typically required for upkeep? Will I need to hire an in-house data scientist to manage it, or is that included? If they give you a vague "it's minimal" answer, push back. I want numbers. I want to know if it's gonna cost me an extra $1,000 a month for compute and $500 for maintenance hours, or if they just expect me to figure it out.
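Here's the shape of the worksheet I want filled in. Every figure is a placeholder (the compute and maintenance lines just echo my $1,000 and $500 examples above); the vendor's job is to replace them with real numbers.

```python
# A monthly run-cost worksheet. Every figure is a placeholder that echoes
# the examples above; the vendor should fill it in precisely.
monthly_costs = {
    "compute (AWS/GCP/Azure)":       1_000.00,  # inference + retraining jobs
    "maintenance hours (4 @ $125)":    500.00,  # pipeline fixes, monitoring
    "data storage & transfer":          80.00,
    "third-party data / API fees":     150.00,
    "monitoring & alerting tooling":    50.00,
}
total = sum(monthly_costs.values())
for item, cost in monthly_costs.items():
    print(f"{item:<32} ${cost:>9,.2f}")
print(f"{'TOTAL':<32} ${total:>9,.2f}   (${total * 12:,.2f}/year)")
```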

8. What's your proposed fallback or manual override plan if the AI goes completely haywire?

AI isn't perfect, and sometimes things go sideways. What's the plan B? If your AI that handles customer service queries suddenly starts giving nonsensical answers, or your inventory optimization AI suggests ordering 10,000 units of something you don't sell, what happens? Is there a clear, immediate way to switch to a human-driven process? What's the alert system? How quickly can it be reverted? A good vendor thinks about these edge cases and has a robust contingency plan. If they haven't considered this, they're probably not thinking about the real-world operational impact of their solution.
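A decent contingency plan usually boils down to something like this sketch: validate every AI output against sanity bounds, keep a manual kill switch, and fall back to the boring manual rule when either trips. All the names, the env var, and the threshold are hypothetical.

```python
# Guardrail wrapper: sanity-check every AI output, keep a manual kill
# switch, and fall back to a rule-based default when either trips.
# The names, env var, and threshold are all hypothetical.
import os

MAX_REORDER_UNITS = 500   # anything above this is implausible for our volume

def kill_switch_on() -> bool:
    """Manual override: ops flips one env var to bypass the AI entirely."""
    return os.environ.get("AI_DISABLED", "0") == "1"

def alert_ops(message: str) -> None:
    print(f"[ALERT] {message}")   # stand-in for a PagerDuty/Slack hook

def safe_reorder_quantity(ai_suggestion: int, last_month_sales: int) -> int:
    if kill_switch_on():
        return last_month_sales                  # human-driven fallback path
    if not 0 <= ai_suggestion <= MAX_REORDER_UNITS:
        alert_ops(f"AI suggested {ai_suggestion} units; reverting to manual rule")
        return last_month_sales
    return ai_suggestion

# The 10,000-units-of-something-you-don't-sell scenario gets caught:
print(safe_reorder_quantity(ai_suggestion=10_000, last_month_sales=120))  # -> 120
```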

9. Who on your team, specifically, will be the primary technical contact for my team, and what's their background?

When you're dealing with technical systems, you need a technical point of contact, not just a sales rep or a project manager. I wanna know the name of the person who's actually gonna be involved in the nitty-gritty. What's their experience? Do they have a computer science degree, or did they take a bootcamp? Have they worked on similar projects? It's about knowing who you can call when things get complicated. If they say "our support team" or "whoever is available," that's a sign they might not have the depth of expertise you need, or they're going to shuffle you around when you have a real technical question.

Alright, that's the list. A few I almost included: "How do you handle data privacy and compliance (GDPR, HIPAA, CCPA)?", "What's the typical time-to-value for a client of our size?", and "Can you provide two client references I can call right now?".



Want help figuring out which of these apply to you?

20 minutes, no deck. I'll be straight if I can help.

Book a 20-min call