The Most Valuable Skill I’ve Seen Engineers Lose, and Why It Matters Now

This time of year has a way of slowing things down.

As winter settles in and we all spend a little more time with family and friends, there’s often a moment when the noise quiets just enough to think. For most people, I’m sure it’s about the magic of the holidays, time with loved ones, or maybe even a warm-weather vacation. For me, my head wanders to my work. Not the next sprint or the next feature, but what actually matters over the long haul.

This year, that reflection keeps coming back to a simple question:

What really makes a great engineer?

After decades in this industry, I’ve seen tools change, paradigms shift, and entire job descriptions come and go. But I’ve also watched one critical skill slowly fade into the background.

And right now, that skill matters more than ever.

It Isn’t Coding Speed or Tool Knowledge

The most valuable skill I’ve seen engineers lose isn’t typing speed, language fluency, or familiarity with the latest framework.

It’s not PR velocity or anything measured by DORA metrics. It’s definitely not who has the deepest front-end framework expertise.

All of those things are valuable. But something else is more important.

It’s the ability to reason through ambiguity.

When I was coming up, we didn’t have the luxury of abstraction layers everywhere. If something didn’t work, you traced it. You reasoned about it. You figured out why.

I’ve mentioned before that I used to include algorithm questions in my hiring assessments. They mattered. Not because engineers would be implementing them every day, but because algorithms expose reasoning, tradeoffs, and comfort with uncertainty.

The final part of my assessment was a four-question story titled “One Bad Day.” In it, engineers were faced with real-world problems and incomplete information. There were no right or wrong answers. What mattered was how they handled uncertainty when confronted with it.

Those questions revealed how someone thinks when there isn’t a clear path forward.

As software evolved, we got very good at assembling systems. We got much less comfortable sitting with ambiguity.

Abstraction Is Powerful, But It Has a Cost

Modern tools are incredible. They let us build faster and bigger than ever before. But they also hide complexity, and when complexity is hidden long enough, people forget it exists.

That’s how we end up with engineers who are productive, but uneasy the moment something doesn’t behave as expected. When the happy path breaks, the thinking muscle hasn’t been exercised.

AI accelerates this trend if we’re not careful.

Why This Skill Matters More Now, Not Less

There’s a fear that AI will do the thinking for us.

I believe the opposite.

AI is very good at producing output. It’s much worse at knowing when that output is wrong, incomplete, or misaligned with intent. That gap is where human reasoning becomes invaluable.

The real present in this new era isn’t faster code generation. It’s the opportunity to refocus engineers on judgment, evaluation, and problem framing.

Those are learned skills. They compound over time. And they don’t disappear when tools change.

The Gift That Actually Lasts

As you head into the end of the year, maybe while you’re opening presents or just enjoying a quieter moment, this is the thing worth investing in.

The greatest gift you can give yourself as an engineer isn’t another tool or certification. It’s the willingness to slow down, sit with ambiguity, and reason your way through it.

That skill never goes out of style.

From Monochrome to Machine Learning: Reflecting on a Lifetime in Tech Before the Next Leap

The first computer most people encounter today fits in their pocket.
Mine filled most of a desk. Actually, a desk and a printer cabinet next to it.

My very first computer experiences were on our school’s Apple II machines, playing Oregon Trail on either black-and-white TVs or bright green monochrome monitors. Later at home, in the late 1980s, we got an Epson Equity I+ with a 10 MHz CPU, 640K of RAM, a 10 MB hard drive, a 5.25-inch floppy drive, and an EGA monitor that felt like stepping into full color after years of monochrome. If you lived through that transition, you remember how magical it felt. Not just more pixels, but more possibility.

Since then I have watched wave after wave of transformation reshape software and the world around it.

  • The CPU evolution, from the simplicity of that first 10 MHz machine to modern processing stacks capable of teraflops.
  • Gaming advances from Pac-Man at the arcade or on Atari at home, to Super Mario on Nintendo, from 8-bit to 16-bit to 64-bit, and eventually to fully immersive 3D experiences.
  • The rise of the internet, starting with dial-up BBS systems and rapidly becoming the global information network we rely on today.
  • Communication shifting from letters and postcards to electronic messaging, long before anyone imagined what smartphones would become.
  • Social platforms growing from AOL chatrooms and small forums to worldwide communities where billions share ideas.
  • The mobile phone evolving from a clunky brick attached to a briefcase to a pocket-sized computer that dwarfs the power of my first PC.
  • Music and movies becoming instantly accessible. No more waiting for release days or storing shelves full of physical media.
  • Breakthroughs in biology that finally let us map DNA and understand life in new ways.
  • Automation taking form in 3D printing, robotics, and smart home technology.
  • Machine learning taking root, followed by modern generative AI that can now write, reason, and assist in ways we only imagined in science fiction.

And these are just the highlights. After 25 years in the industry and a lifetime of tinkering, the list is a long one.

What is interesting about right now, especially with 2026 on the horizon, is that it feels like another one of those rare pivot points. Except this time the shift is not about faster processors or prettier screens. It is about the relationship between humans and software fundamentally changing.

We are entering an era where tools are not just tools. They are partners. They help us think, reason, and build. They fill gaps. They accelerate ideas. They lift creativity instead of replacing it.

For the first time in my career, I am building something that sits directly in the center of that shift. Something that focuses on how to evolve the bigger picture rather than profit from it. Something designed with everyone in mind, not tied to advancing any particular agenda.

I will soon be announcing something big for software development. At least, I think it is big. For now, I will simply say this:

Everything I have learned since that Epson Equity, from the GW-BASIC and MS-DOS 3.2 manuals, through distributed computing, microservices, and machine learning, all the way to modern AI, has been quietly converging into a project I have been shaping for a very long time. I just did not realize it until recently.

It is close now.

And once it is ready, I believe it will change how a lot of people think about building software, and possibly grow into something far larger than that.

Congratulations on Your Failure

On my desk I have a placard that says:

Failure is an opportunity to intelligently begin again

This is one of the principles I have built my career on. I’m certain that I have failed and lived to tell about it many more times than I have celebrated sweet success.

First, it’s important to recognize that failure is okay as long as you learn something from it. Failing without learning is like paying for dinner at a fine restaurant and sending your plate back to the kitchen without eating. It’s expensive, and you miss out on the good stuff. The truth is that failing at anything can teach you lessons you just can’t get in any other way.

There is a mantra in startup companies: fail fast and fail often. This iterative idea is prevalent in business today. Agile methodology reinforces the concept, and it has proven that, when applied correctly, it gets a business not only from point A to point B faster, but often to a point B that isn’t the one originally envisioned, but an exponentially better one. Gradual iterations provide more opportunities to fail, and therefore more opportunities to learn, solve new problems, confirm or deny assumptions, and improve.

I have always encouraged my employees to try things and see if they work. I teach them to embrace failure and avoid paralysis by analysis. They should not fear the unknown, but rather be excited about the possibilities that lie ahead. These concepts are not in our nature as human beings. We have primal instincts to survive, and embracing failure puts us in conflict with them.

Learning from failure is such a powerful tool for us in IT. We have a unique landscape of tools and technologies. We can build and rebuild using Agile principles, and be more nimble and swift than other industries. It is imperative that we fail, learn from the failure, and repeat until we get the desired outcome. I promise, if you focus on this moment in time, this single problem, and actually learn as you go, the end result will be better than you could have ever imagined in the first place.

Computing is Problem Solving

I started out my career as a software developer, but well before that I learned some valuable lessons about what it would take to build a career in computers.

When I was 10 years old my grandmother passed away. She didn’t have a lot of money, but she loved her grandson enough to leave him two gifts. First, she left enough money for me to go to Disney World, because all kids should get to experience that place. Second, she left me the money to get a home computer. At that time, in the 1980s, home computing wasn’t that common, but it was becoming more so every day.

My first computer was an Epson Equity I+ with a 5.25″ floppy disk drive, an EGA monitor, and a Panasonic dot matrix printer.  I am still kicking myself for throwing it out.  I have fond memories of my time learning on that computer.

When we got the computer unboxed and set up in a spare bedroom at our house, I remember the anticipation of what I would see. My computing experience before this was a TI-99 that my cousins and I got one year in the early ’80s and that I promptly disassembled, the Atari 2600 that a friend had, and the Nintendo Entertainment System that I schooled everyone on as a kid. Maybe I didn’t really school everyone on the NES, but let me have my moment. Back to my first boot. After hitting the power button, I was expecting some graphics, music, a game perhaps. What did I see? The screen counted up to 640K of RAM, then came some grinding sounds and a chirp, and finally a blinking cursor at a C:\ prompt. I didn’t know what that meant, but I knew I had work to do.

Along with my home computing system came three fat spiral-bound books. The first covered how to hook up the actual hardware, along with information about the hardware itself. I threw that to the side, because we had fumbled through that already. The second was a book about MS-DOS 3.0. I thought to myself, “This might come in handy,” and set it in front of me. Finally, there was a book on programming my new computer with GW-BASIC. I thought something to the effect of, “This is it! I can actually build video games!”

I spent the next four years or so hammering away at that computer. As friends got 286, 386, and 486 computers, I still had fun making things work on my 8088 processor architecture. I built scripts to make home computing easier and to display boot menus and such. I tinkered with simple game design. I taught the computer to make noises, play sounds, and even stumble through some bad computer beep music. I turned ASCII characters into art and learned a drawing library that taught me how to draw circles, squares, rectangles, lines, and more in all 64 colors of my EGA monitor’s palette. I built routines to print amazing HAPPY BIRTHDAY banners in many varieties of fonts, including some homegrown ones. I acquired a modem and made my computer talk over the telephone line to my cousin’s computer, just because the anticipation of waiting on the response to “Hello” was so darn thrilling. I remember getting a mouse, installing the drivers, and building my first program that could accept a click.

My first computer taught me the most important lesson of my career. It isn’t the hardware you have or don’t have. It isn’t the computer programming language you choose. It isn’t the skills you possess now or the mountains of practice and research it takes to gain new ones. Computing is simply about a can-do attitude and a relentless desire to solve the problem at hand.

As IT professionals, we are tasked with making the impossible possible. We are tasked with building something that has never been built before. This is often done on a tight budget, with equipment and tools that are lacking in more ways than the blinking C:\ prompt I saw when I booted my first PC.

Arthur C. Clarke gave us three laws, beginning with his essay “Hazards of Prophecy: The Failure of Imagination”:

  • When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
  • The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
  • Any sufficiently advanced technology is indistinguishable from magic.

I agree, especially with the last one. The way I describe it is that, as IT professionals, it is our job and our privilege to turn fantasy into reality, fiction into fact, nothing into something, and to solve problems with unbounded enthusiasm and reckless abandon.