Code Smells Are Signals, Not Sins — And Patterns Won’t Save You
Most interview loops still test whether you can build a list view.
Reality tests whether your system can survive change.
That’s where code smells matter—not as moral judgement (“bad dev”), but as risk signals: indicators that the design is drifting toward higher maintenance cost and lower change velocity. The research literature is surprisingly aligned on that framing: smells aren’t bugs (the app still runs), and they’re not guaranteed anti-patterns either—they’re hints that something is off and may become expensive later. Paper #1
This post is a practical synthesis of two papers:
- A broad literature review that scanned 114 studies (2009–2022), summarized common smells, and mapped detection approaches and tools. Paper #1
- A more focused paper on what happens when design patterns and code smells co-occur, and what that does to internal quality. Paper #2
The punchline:
Smells are often manageable alone; they get dangerous when they cluster. And patterns don’t magically prevent that clustering.
1) The part people miss: smell clusters matter more than individual smells
A key idea in the co-occurrence literature is that the maintenance pain isn’t driven by one “Long Method” hiding in the weeds—it’s driven by agglomerations of smells that reinforce each other (the paper cites multiple studies pointing in that direction). Paper #2
Why this matters for senior engineers:
- Juniors talk about single refactors.
- Seniors talk about systemic drift: how small local compromises accumulate into global friction.
That’s also why code review culture differs between teams that ship and teams that demo: shipping teams learn to treat smells as leading indicators.
2) Smell taxonomies are useful, but only if they change decisions
The 2024 review calls out a common taxonomy (Fontana et al.) that groups smells into buckets like Bloaters, OO Abusers, Change Preventers, Dispensables, Couplers, Encapsulators. Paper #1
That taxonomy becomes valuable when it guides what you do next.
Here’s how I translate those buckets into “survives reality” heuristics:
Bloaters → “This component is becoming a product”
If a view model or coordinator keeps absorbing responsibilities, it will eventually become the place where deadlines go to die.
Change Preventers → “Every feature causes shotgun surgery”
If one change forces edits across many files, your system doesn’t have a change boundary—only a hope boundary.
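The “shotgun surgery” heuristic can be made measurable. A minimal sketch, assuming the changed paths come from something like `git diff --name-only` and that “module” simply means top-level directory (a simplifying assumption, not a rule from either paper). Python is used here for brevity even though the post is Swift-flavored:

```python
def shotgun_score(changed_paths: list[str]) -> int:
    """Count distinct top-level modules touched by one change-set.

    A high score suggests the change had no clean boundary:
    one feature forced edits across many modules.
    """
    modules = {p.split("/", 1)[0] for p in changed_paths if "/" in p}
    return len(modules)


def worst_offenders(history: list[list[str]], threshold: int = 3) -> list[int]:
    """Indices of change-sets that touched >= threshold modules."""
    return [i for i, paths in enumerate(history) if shotgun_score(paths) >= threshold]


# Hypothetical change-set: one feature, three modules touched.
print(shotgun_score(["Feature/A.swift", "Networking/Client.swift", "UI/View.swift"]))  # 3
```

Run that over your last few dozen merges and you have an empirical answer to “do we have a change boundary or a hope boundary?”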
Couplers → “This module knows too much”
Feature Envy is a symptom that your domain boundaries are wrong, not that someone needs to rearrange functions.
This matters because the review found common smells repeatedly studied include Long Method, Feature Envy, and Duplicate Code. Paper #1
Those aren’t exotic problems. They’re everyday problems that quietly tax delivery.
3) Detection is still mostly “classic” (metrics/heuristics), not AI magic
If you think the field is already dominated by ML-based smell detection, the data says otherwise.
In that review: Paper #1
- ~72.8% of studies used non-ML approaches (heuristics/metrics), and ~21.92% used ML techniques.
- It also notes an expanding tool landscape (they report 87 tools across the review).
- And it calls out specific ML algorithms that performed well in the reviewed set (e.g., Random Forest and JRip).
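To make “heuristics/metrics” concrete: here is a deliberately naive, threshold-based sketch of the kind of non-ML detection the majority of those studies use. The thresholds (30 lines, 5 parameters) and the regex are illustrative choices of mine, not values from either paper:

```python
import re

# Naive Swift function matcher: captures the name and the raw parameter list.
FUNC_RE = re.compile(r"func\s+(\w+)\s*\(([^)]*)\)")


def flag_smells(source: str, max_lines: int = 30, max_params: int = 5) -> list[str]:
    """Threshold-based detection of two 'usual suspects':
    Long Method and Long Parameter List. Purely metric-driven, no ML."""
    smells = []
    lines = source.splitlines()
    for i, line in enumerate(lines):
        m = FUNC_RE.search(line)
        if not m:
            continue
        name, params = m.groups()
        n_params = len([p for p in params.split(",") if p.strip()])
        if n_params > max_params:
            smells.append(f"{name}: LongParameterList ({n_params} params)")
        # Crude body length: lines until the next 'func' (or end of file).
        body = 0
        for later in lines[i + 1:]:
            if FUNC_RE.search(later):
                break
            body += 1
        if body > max_lines:
            smells.append(f"{name}: LongMethod ({body} lines)")
    return smells
```

It is intentionally dumb, and that’s the point: most of the field still looks like this, because it is cheap, explainable, and good enough to surface drift.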
So yes—AI is coming for everything—but this domain is still mostly governed by:
- clean boundaries,
- good measurement,
- and boring discipline.
Which is exactly what hiring managers want seniors to demonstrate.
4) Design patterns can help… and still correlate with smell problems
Patterns aim to reduce coupling and improve flexibility, but the co-occurrence paper makes the point that researchers study patterns and smells together specifically because patterns aren’t a guaranteed quality win. Paper #2
My practical take:
Patterns are like leverage tools.
Use them with clear boundaries and they amplify good structure.
Use them as decoration and they amplify complexity.
The “pattern-as-deodorant” failure mode
A system can be “patterned” and still smelly if:
- responsibilities are unclear,
- abstractions are premature,
- boundaries don’t match the direction of change.
In interviews, many candidates can name patterns.
Very few can explain when a pattern is the wrong move.
That’s a senior signal.
5) A senior-level workflow: treat smells as drift signals, then respond with process Paper #1
Here’s the workflow I’ve seen work in high-output teams:
Step 1 — Detect drift early (lightweight)
Pick a small set of smells you care about (the usual suspects from the review—Long Method, Feature Envy, Duplicate Code—are studied so often for a reason).
Step 2 — Ask “what change does this smell predict?”
Smells matter because they predict future pain, not because they’re ugly.
Step 3 — Refactor surgically, but fix the boundary, not the symptom
If you only shorten a Long Method without changing responsibility boundaries, you’re doing gardening, not engineering.
Step 4 — Watch for co-occurrence (clusters)
When smells cluster, it usually points to a deeper design issue. The literature is explicit that cumulative effects can be more dangerous than any single smell category.
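Step 4 is also cheap to automate. A sketch, assuming you already have per-file smell reports from whatever detector you use (the file names and smell labels below are made up for illustration):

```python
def smell_clusters(report: dict[str, list[str]], min_smells: int = 2) -> dict[str, list[str]]:
    """Return only the files where distinct smells co-occur.

    Per the co-occurrence literature, these agglomerations are a
    stronger drift signal than any single smell in isolation.
    """
    return {f: sorted(set(s)) for f, s in report.items() if len(set(s)) >= min_smells}


# Hypothetical detector output.
report = {
    "CheckoutViewModel.swift": ["LongMethod", "FeatureEnvy", "LongParameterList"],
    "PriceFormatter.swift": ["LongMethod"],
}
print(smell_clusters(report))  # only CheckoutViewModel.swift survives
```

A single Long Method in a formatter is gardening work; three smells converging on one view model is a boundary problem.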
Companion code idea for this post (repo-friendly, interview-useful)
If you want a code artifact that doesn’t feel like “junior list view demo,” build this:
A “Smell Radar” mini-tool + playbook repo
- A tiny Swift Package that:
  - parses Swift files (even naive regex + metrics is fine for v1),
  - emits a JSON report with:
    - file/function size,
    - parameter counts,
    - duplication hints,
    - “touch count” across modules for a given change-set (git diff based).
- A /playbook folder with:
  - “When this smell shows up, here are 3 refactor options + tradeoffs”
  - “When two smells co-occur, here’s what it usually means architecturally”
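To show the shape of that v1 report, here is a Python sketch of the JSON emitter (the repo itself would be a Swift Package; Python is used here only so the example stays short and self-contained, and the regex/report schema are my assumptions):

```python
import json
import re

# Naive Swift function matcher: captures name and raw parameter list.
FUNC_RE = re.compile(r"func\s+(\w+)\s*\(([^)]*)\)")


def radar_report(files: dict[str, str]) -> str:
    """Emit the v1 JSON report: per-file size plus per-function
    parameter counts. Duplication hints and touch counts are v2."""
    out = {}
    for path, source in files.items():
        funcs = [
            {"name": name,
             "params": len([p for p in params.split(",") if p.strip()])}
            for name, params in FUNC_RE.findall(source)
        ]
        out[path] = {"lines": len(source.splitlines()), "functions": funcs}
    return json.dumps(out, indent=2, sort_keys=True)
```

Keeping the output as plain JSON is deliberate: it lets the playbook, CI checks, and any future ML experiment consume the same artifact.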
That demonstrates:
- measurement,
- process,
- and architectural thinking—without pretending you’re inventing SonarQube.
Closing: what this says in an interview
If someone asks you to build a list view live:
You can do it—but you can also say:
“I’ve built that a thousand times. What I’d rather show is how I keep a codebase healthy under change: how I detect design drift, prevent smell clusters, and choose refactors that improve change boundaries instead of just rearranging code.”
Then point them to the post + repo.
That’s senior.