#3 Developers
I’d like to extend an olive branch to the developers out there.
And then beat you with it.
Where do I even begin? Let’s start with the good—because that’s always a goodish place to start.
Masterminds, But What Kind?
Developers are insanely talented, essential resources. Y’all learned the sacred art of making computers do things—actual magic in my book. You’re more like wizards than Code Monkeys, to be honest. The creativity many of you have is plastered across the digital world we live in. None of the interfaces, services, or software we rely on would exist without your sweat, patience, and high tolerance for nonsense.
Coding, to me, is the purest form of the Information Technologist profession. You understand how systems talk to each other in ways most of us couldn’t even begin to imagine.
And yet… why you no communicate so well with InfoSec??
Under Pressure
The pressure on developers is absolutely disgusting. You can thank the #1 threat on our list (more on that later!) for that. The expectations on developers are insane, and InfoSec’s right there in the same boat. Both sides are supposed to be experts, following best practices handed down from mentors, certification bodies, and industry gurus. In a perfect world—with the right resources and enough time—developers and InfoSec could crank out deliverables that would make any requestor downright ecstatic.
But here’s the kicker—the requestors aren’t exactly “normal.”
And yeah, I know what you’re thinking—having InfoSec and developers claim to know what’s “normal” is… ironic. Touché.
In our world, though, the folks who actually know how the beeps beep and the boops boop are the “normal” ones. It’s the business folks—the ones trying to pull ideas out of thin air without even knowing how the hickey’s doohickey—who spark all the communication breakdowns.
The business side always wants something shiny—a product, service, or feature they think will unlock magical value for clients or customers. But the problems start with how those requests are communicated, the resources needed to make them happen, and the laughably tight timelines they expect.
While the business is off selling dreams, making promises, and keeping customers happy, the internal engines—developers—are getting fed low-octane gas in the form of poor management.
Then you throw in client demands, compliance requirements, and security needs, and now developers are swimming in a pool of conflicting priorities, still expected to deliver at full speed as if nothing’s wrong.
Most developers are happy to code away, building solutions when they’ve got the right setup. But when deadlines are squeezed, scope creep kicks in, and requirements get fuzzy, things tend to go sideways. Best practices get tossed aside, and the focus shifts to just getting something out the door—creativity, quality, and pride in the work be damned.
Do some developers develop apathy? Absolutely. Just like some InfoSec folks do. At the end of the day, you have to ask: whose risk is it anyway? If the organization’s priority is hitting a deadline over delivering quality, then why should the developer be left holding the bag, wrestling with the guilt of not meeting a higher standard all on their own?
Defiance
InfoSec and compliance teams could be some of the biggest allies for developers. When developers know they’re about to push compromised code into the codebase, without a chance to perform proper testing or clean up technical debt, there are policies and mandates they could leverage to buy a little breathing room and improve the quality of their work.
Is that usually the case? Maybe in some places. But in most… developers have some of the most defiant personalities I’ve ever encountered—at least when it comes to working with the InfoSec team. If the business throws out an impossible request, it’s usually met with a “Sir, Yes Sir! Can I have another?”
But if InfoSec asks for a patch on an eight-year-old critical vulnerability… well, that’s a whole different story.
Am I overstating things? Maybe. I want to give developers the benefit of the doubt here—being objective is a trait I admire. But authenticity and transparency mean even more to me, and if I’m being real, there’s often a serious disconnect between what developers think is secure and what InfoSec is tasked with keeping secure.
I can’t tell you how many times I’ve had conversations with developers who admit—sometimes proudly—that they bypass security controls using personal devices or come up with some 007/Mr. Robot-style workarounds. They say things like, “The system can’t be secure if I’m able to do this.”
No sh*t, Sherlock.
Why do some developers gloat at finding a workaround instead of sharing it with InfoSec? Let’s be real—figuring out how to bypass a clunky control system can feel pretty satisfying. And sure, it’s not exactly the same as pulling off an Ocean’s Eleven heist, but it still feels good in the moment.
But here’s the thing: that little flex is more than just harmless fun—it’s a symptom of a deeper problem. When developers feel like their creativity is stifled and security is just another blocker, moments like these are bound to happen. And while the victory lap might be short, the consequences can be long-lasting—leaving the organization exposed to risks everyone will eventually have to scramble to fix.
Alright, maybe not every developer is out here plotting ways to get one over on InfoSec—but anyone who’s been around long enough has seen at least a few who enjoy the occasional workaround. The truth is, both sides have legitimate concerns. Developers need the freedom to deliver without constantly hitting roadblocks, and security needs to make sure those deliverables don’t open up new vulnerabilities.
If we could align priorities earlier—before developers feel forced into a corner—we might avoid those triple-bypass workarounds that eventually come back to haunt us all.
Let’s be friends.
Of course, I also know plenty of InfoSec professionals who probably shouldn’t be allowed anywhere near a security role. Their entire mission seems to be about locking everyone down, ensuring that every molecule of air a workforce member breathes is monitored and analyzed. These are the same folks who sleep like babies with 40 critical vulnerabilities on their domain controllers but will raise hell if a developer wants to write a unit test from their personal laptop.
This is all supposed to be risk-based work. Who owns the risk? How do they want to manage it—mitigate, accept, transfer, or avoid? What’s the impact? What’s the probability it could happen?
Often, neither the InfoSec team nor the developers technically own the risks. And that’s where a lot of the burnout, stress, and anxiety comes from. Reporting issues like technical debt, vulnerabilities, bugs, and threats shouldn’t be a source of stress—it should feel liberating. The business needs to establish the right governance to assign risk ownership. Once that’s in place, the teams responsible can make informed decisions on how to handle things. This makes it easier for everyone to work transparently, fluidly, and without compromise.
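To make the ownership question concrete, here’s a minimal sketch—in Python, with entirely hypothetical names and numbers—of what a risk register entry with a named owner and an explicit treatment decision might look like. The scoring scheme (impact × probability on a 1–5 scale) is one common convention, not a standard the source prescribes:

```python
from dataclasses import dataclass
from enum import Enum


class Treatment(Enum):
    """The four classic risk treatment options."""
    MITIGATE = "mitigate"
    ACCEPT = "accept"
    TRANSFER = "transfer"
    AVOID = "avoid"


@dataclass
class Risk:
    title: str
    owner: str            # a named business owner -- not "InfoSec" or "the devs"
    impact: int           # 1 (negligible) .. 5 (catastrophic)
    probability: int      # 1 (rare) .. 5 (almost certain)
    treatment: Treatment

    @property
    def score(self) -> int:
        # Simple impact-times-probability heat-map score (1..25)
        return self.impact * self.probability


# Hypothetical example: that eight-year-old critical vulnerability,
# finally assigned to someone who can actually make the call.
risk = Risk(
    title="Unpatched 8-year-old critical vulnerability",
    owner="VP of Engineering",
    impact=5,
    probability=4,
    treatment=Treatment.MITIGATE,
)
print(risk.score)  # 20 -- high enough that nobody gets to quietly ignore it
```

The point isn’t the tooling—it’s that once a risk has a name, an owner, and a documented decision, reporting it stops being a confession and starts being routine.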
Easier said than done, I know.
And You Think the US Government Has a Debt Problem...
I can only imagine what’s going through a developer’s head when they join a new organization—full of excitement over the opportunities they were promised during the interview process—only to discover that the codebase is an absolute dumpster fire. A lot of them probably think, “Yep, sounds about right.”
Technical debt in the codebase is like ramming into an iceberg on a boat, and we all know how that story ends. Product teams and developers often resort to shortcuts or suboptimal workarounds just to get a product, feature, or critical fix out the door. It’s a rational, logical trade-off—as long as there’s a plan to go back and pay down that debt later.
But here’s the problem: organizational priorities rarely align with the idea of “going back to fix things.” Most of the time, anything that doesn’t directly impact the bottom line is tossed out the window. Quality? Out the window. Code hygiene? Gone. It becomes a careful balancing act, one where the goal is to keep piling on debt to maximize short-term gains… and then pile on even more debt to keep that train running. It’s like using a credit card to pay off another credit card and hoping no one notices.
From afar, it’s easy for outsiders—product managers, execs, and every casual bystander—to peer in and say, “Hey, why don’t you just do X and fix the issue?” But the people actually dealing with this bad code, these Band-Aid solutions, and the warped risk-reward structure have to juggle dozens of constraints that limit their ability to make meaningful improvements.
One thing seems clear, though: any real solution has to start at the top, with a firm commitment to code quality as a fundamental part of every deliverable. If “don’t let the debt pile up” became a company mantra, we might actually see progress. A system that rewards and incentivizes quality—not just in developers but across the whole business—could help prevent major sections of code from being abandoned just because it’s no longer feasible to fix, update, or even manage.
Instead, developers are dealing with more pressing issues that are tied directly to the almighty dollar, while potential bugs, unsupported libraries, and lurking vulnerabilities get grandfathered in as “the cost of doing business.”
In his article “How to Deal with Technical Debt,” Dr. Milan Milanović addresses this from a governance perspective and highlights several factors that define technical debt:
- Code Quality
- Testing
- Coupling
- Out-of-date Libraries and Tools, etc.
He also discusses different methods to track and measure this debt load, such as:
- Technical Debt Ratio
- Code Quality Metrics
- Defect Ratios and Lead Time
For anyone looking to understand technical debt from a governance perspective, this article is a great starting point.
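As a rough illustration of the first metric: Technical Debt Ratio is commonly computed as the estimated cost to remediate the debt divided by the cost of developing the code in the first place. The figures below are made up, and the ~5% warning threshold is a common rule of thumb (it’s the boundary some static-analysis tools use for their top maintainability rating), not something from the article itself:

```python
def technical_debt_ratio(remediation_cost: float, development_cost: float) -> float:
    """Technical Debt Ratio: cost to fix the debt, expressed as a
    percentage of the cost to build the code in the first place."""
    if development_cost <= 0:
        raise ValueError("development cost must be positive")
    return (remediation_cost / development_cost) * 100


# Hypothetical example: 80 hours of estimated cleanup on a module
# that took 1,600 hours to build.
tdr = technical_debt_ratio(remediation_cost=80, development_cost=1_600)
print(f"{tdr:.1f}%")  # 5.0% -- right at the threshold many teams treat as a warning sign
```

The units don’t matter (hours, dollars, story points) as long as both sides of the ratio use the same ones; what matters is tracking the number over time so the debt stops being invisible.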
But in the meantime? Developers are left to navigate a codebase full of potholes and debt, and it’s often up to them to decide: do I fix it, or do I let it slide? That’s not a decision anyone should have to make alone, especially when the stakes are high and the pressure’s relentless.
The Wrap: Broken Incentives
The reality is, someone could write an epic saga on the issues software developers and product management teams face in Corporate America. Developers aren’t exactly bending over backward to maintain pristine code if the companies paying them don’t even make it a priority. And can you blame them? It’s like asking someone to spend hours scrubbing the floors of a burning building. “Quality” becomes one of those words that sounds nice in theory, but, well, who has the time? Who has the resources? Who has the tools? Who has the leadership that will support it?
Here’s the real b&!**: business priorities and IT priorities are often about as aligned as a compass in a magnet factory. While IT teams think their mission is all about quality (ha!), some organizations are secretly (or not-so-secretly) thrilled at the prospect of churning out the crappiest product they can, as long as it saves time and boosts the bottom line. Lower-quality products, shorter dev cycles, fewer bug fixes—it all translates to higher margins and makes that quarterly report look great. Everything else is just… an abstraction. If code quality doesn’t threaten the brand (think outages, vulnerabilities, or the occasional public meltdown), why would some execs even care?
And here’s where it gets ugly. Without regulatory watchdogs breathing down their necks, forcing secure coding practices, code reviews, and a full orchestra of compliance measures, the state of most codebases would probably be... well, let’s just say “unhealthy.” Sure, that might sound obvious, but here’s a fun little thought experiment: if there were zero external pressures, where would the balance settle between cost-cutting shortcuts and any semblance of code hygiene? My money’s on “far, far from ideal.”
From a security perspective, understanding these incentives isn’t just helpful—it’s essential. Developers are trapped in a vicious cycle, playing a game of “Do I care, or don’t I care?” They’re given a codebase that’s like a ticking time bomb, with management whispering, “Don’t worry about the wires—just get it working!” If code quality isn’t a priority for the business, then maintaining it becomes a purely theoretical exercise in risk management. And that’s a horrible place to put a professional.
This leaves developers in an absurd spot, where they’re often forced to work with duct-taped, Frankenstein-like code that only becomes more unmanageable with each new sprint. Imagine being told, “Hey, we know the car barely drives, but just take it out on the freeway and see what happens.” It’s no wonder they’re less than eager to cozy up to InfoSec when they’re already navigating this minefield daily.
And sure, some developers decide it’s not their problem to worry about vulnerabilities or technical debt—that they’ve done their part by delivering on deadline. But this disconnect doesn’t come from laziness; it’s survival. If the business doesn’t prioritize quality, why should developers set themselves on fire to fix it?
The truth is, both sides are stuck in a system that rewards shortcuts. Developers need the freedom to deliver without constantly getting tripped up by “just one more security check,” and security teams need to ensure those deliverables don’t leave doors wide open for the next big breach.
So, where does the solution come from? Will it take a hard-hitting regulation? A wake-up call from a client? Or maybe, just maybe, an executive brave enough to lead the way out of this maze of quick fixes and into the light of sustainable, quality code. Because until that happens, developers and security pros alike are just paddling along in a very leaky boat.