Judgment.
The credential proves technical competence. It says nothing about the rest.
I’ve interviewed candidates for data roles who were technically excellent - SQL sharp, portfolio polished, confident in every answer.
Some of them didn’t ask a single question about the business, the team, or what the role actually needed.
Not because they were nervous. Because they genuinely didn’t think it was relevant.
They had the credentials. They’d done the work to build them. And somewhere along the way, nobody had told them that being technically right and being professionally effective are two different things - and confusing them is expensive. The cost shows up quietly: correct work that doesn’t move decisions, a recommendation dismissed in the room, a career that plateaus without a clear reason why.
That gap has a name. It’s judgment. And almost no data program explicitly teaches it.
The standard response to this is to say that data professionals need better “soft skills” - communication, collaboration, stakeholder management. That’s not wrong. But it’s incomplete in a way that matters.
The reason judgment doesn’t make it into most curricula isn’t that instructors don’t value it. It’s that the standard tools of instruction simply can’t touch it.
SQL exercises have right answers. Tableau assignments have rubrics.
Judgment doesn’t fit either container. You can’t auto-grade the decision to escalate a finding before you’ve fully investigated it. You can’t write a learning objective that says “decides when correlation is being mistaken for causation, and says so, clearly, without losing the room.” So it gets skipped - not from negligence, but because the entire infrastructure of course design is built for assessable outcomes. Judgment slips through every time.
AI is making this more expensive to ignore. It can write the SQL, generate the chart, run the segmentation. What it cannot do is tell you whether the spike in your data happened because of your campaign or because a competitor went offline for six hours.
When AI handles the technical baseline, the human’s job shifts. You’re no longer the calculator. You’re the circuit breaker - the one who decides what the numbers actually mean, and what to do when they don’t say what someone needs them to say.
The technical baseline is collapsing. The judgment gap is widening.
Most data professionals will hit some version of this scenario. See if it sounds familiar.
An analyst is asked a direct question: did the Q4 email campaign drive the spike in online sales? Leadership needs an answer before Thursday’s budget meeting.
While pulling the data, the analyst notices something: the spike started two days before the campaign launched. So now there are two problems landing at once.
The first: something is off, and they don’t know what yet.
Do they flag it now - with incomplete information and four days to go - and risk raising a false alarm? Or do they keep digging and risk the decision being made without the full picture?
The second: even setting the timing issue aside, “did the campaign cause the spike?” is a harder question than it looks.
There was a competitor promotion that week. Q4 seasonality was already trending up. A product got picked up organically on social. The campaign may have contributed - but the data can’t cleanly separate these threads. Any answer that says “yes, the campaign drove it” is technically defensible. It is also a lie of omission.
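The timing check the analyst runs here - does the spike actually start before the intervention? - is simple to make explicit. Below is a minimal sketch with invented, illustrative numbers (the dates, sales figures, and the 1.3x threshold are all assumptions, not data from the scenario):

```python
from datetime import date

# Hypothetical daily sales around the campaign window - illustrative only.
daily_sales = {
    date(2024, 11, 8): 410,
    date(2024, 11, 9): 405,
    date(2024, 11, 10): 620,  # spike begins here
    date(2024, 11, 11): 680,
    date(2024, 11, 12): 700,  # campaign launch day
    date(2024, 11, 13): 715,
}
campaign_launch = date(2024, 11, 12)

def spike_onset(sales, threshold=1.3):
    """Return the first day whose sales exceed the prior day's by `threshold`x."""
    days = sorted(sales)
    for prev, curr in zip(days, days[1:]):
        if sales[curr] >= sales[prev] * threshold:
            return curr
    return None

onset = spike_onset(daily_sales)
if onset and onset < campaign_launch:
    lead = (campaign_launch - onset).days
    print(f"Spike started {lead} day(s) before launch - attribution is suspect.")
```

The check settles the factual question in minutes. What it cannot settle is the judgment call that follows: whether to flag the finding now or keep digging.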
Two judgment calls, simultaneously.
When do you raise your hand?
And do you answer the question as asked - which is what leadership wants - or do you reframe it and risk becoming the person who never gives a straight answer?
That second tension is where it gets hard. Being transparent about uncertainty is the right call. It’s also the one that can get you labeled as someone who hedges everything and commits to nothing.
The analyst explains why correlation isn’t causation, adds the context, flags the limitations. Sometimes the stakeholder listens to all of that and hears only: we don’t know. That’s a credibility hit, even when you’re right.
Judgment is knowing when to lead with the caveat and when to lead with the direction - and how to hold both without misrepresenting the data. No course teaches that. No rubric builds it.
Most programs don’t have a deliberate process for developing this.
A good instructor weaves it into discussion; a less experienced one skips it entirely because it doesn’t fit the lesson plan.
The result is graduates who are technically solid and judgment-thin - and organizations that are surprised when those same graduates can’t navigate a stakeholder question under pressure.
Job postings don’t help. They list twelve technical requirements and maybe one line about “strong communication skills.” They never say “can decide under uncertainty” or “knows when the question being asked isn’t the right question to answer.” Programs mirror what the job market signals, and the job market signals technical proficiency - because that’s what’s easy to specify, screen for, and assess.
Everything else gets handled in the first year on the job.
Or it doesn’t.
The fix isn’t a new course. It’s a different kind of exercise - one built specifically for a skill that has no single right answer.
Judgment exercises work differently from standard activities.
There are multiple defensible positions.
The learning happens in the debrief, not the deliverable.
And the goal isn’t for learners to get it right - it’s for them to see where their reasoning held up and where it didn’t, in conversation with people who made a different call from the same information.
Here’s the prompt I’d use to build one:
I want to design a short exercise that develops professional judgment in data work - not technical skills.

The specific judgment I want to develop: [e.g., deciding when to escalate a finding before fully investigating it, or when to reframe a question rather than answer it as asked]

Design an exercise that:
- Presents a realistic scenario with no single right answer
- Requires learners to make a judgment call and defend it
- Surfaces the reasoning behind different choices - not just the choice itself
- Can be debriefed in 10 minutes in a group setting

Include the scenario, the learner task, two or three likely positions learners will take, and the debrief questions that help them see where their reasoning held up - and where it didn’t.
Here’s what that prompt generates when applied to the scenario above:
Scenario: It’s Tuesday. The Thursday budget meeting is in 48 hours. Leadership needs to know if the Q4 email campaign drove the sales spike. You’ve spotted a problem - the spike started two days before the campaign launched. You don’t know why yet.
Learner task: Do you flag it now, with incomplete information and two days left to investigate? Or do you keep digging and risk the meeting happening without the full picture?
Position A: Flag it now. Silence is also a choice. The cost of a budget decision made on bad information is higher than the cost of a preliminary heads-up that turns out to be nothing.
Position B: Keep investigating. A premature flag that goes nowhere is its own credibility problem. Get more information before you raise the alarm.
Debrief questions:
What would you need to know before you’d switch positions?
If you flag early, what’s the exact language - and what are you committing to?
When does “I need more time” stop being a genuine evidentiary standard and become a delay tactic?
The debrief questions are the point. That’s where a learner discovers that a colleague with the same data made a different call - and had solid reasons for it. That’s the moment that builds instinct faster than any technical exercise.
Run it once.
See what surfaces.
There’s no right or wrong answer here.
But you’ll learn quickly what your learners actually know - and what they’ve been assuming they’d figure out on the job.
Interested in more of this? Every week in Teach Data with AI, I write about curriculum decisions, real AI workflows, and the parts of data work that don’t fit neatly into a course outline. Every reader is a reminder it’s worth continuing. Subscribe here.


