The role of a software developer has grown through layers of abstraction, evolving from writing commands tied to specific line numbers to using frameworks that handle state management and make garbage collection an afterthought. Yet, for much of the past 80 years, programming has been seen as a cryptic, computer-nerd language—an instant license to fix every relative’s computer. (And maybe your coworker’s son’s friend’s dad’s computer too, but that might just be me.)
Every decade or so, a new major abstraction emerges, fundamentally changing the way developers work:
- 1950s: Translating mnemonics into binary machine code with Assembly Language.
- 1960s: Introducing loops, conditionals, and functions with High-Level Languages.
- 1970s: Creating classes, objects, and inheritance with Object-Oriented Programming.
- 1980s: Connecting computers to share information through The Internet.
- 1990s: Enhancing visual experiences and file management with Integrated Development Environments (IDEs).
- 2000s: Reusing others’ code through Package Managers.
- 2010s: Providing generic, reusable starting points with Frameworks.
- 2020s: Building business-critical apps using interfaces with No-Code Platforms.
But the next big layer of abstraction might truly be paradigm-shifting, fundamentally altering how we perceive what a software developer types into their device. What if this seismic shift blurs the lines between the language we use to communicate and the language we use to code?
It’s me, AI, I’m the problem, it’s me. The next software developer will rely on natural language (prompts) as a new “software language.”
But here’s the thing: natural language is messy. Unlike programming languages, where every comma and semicolon serves a purpose, human language is ambiguous, inconsistent, and sometimes downright nonsensical. This creates challenges for AI systems, particularly large language models (LLMs), which are trained on massive swaths of human language. These models inherit biases and occasionally hallucinate information—problems that stem from the messy, historical nature of human language itself. In a way, you could say we have a bug in our human language compiler. (lolz)
Yet when LLMs generate code, the output needs to be specific. A great developer will know whether it will compile, execute, and ultimately function as intended. AI-generated code is far less forgiving of ambiguity: it must be accurate, and to be accurate, it requires explicit instructions.
This is where the future professional developer can truly shine. Developers will shift from being syntax-driven to being explicitness-driven. The techniques used to prompt AI to write code will become a framework and a skill in their own right.
The ability to craft clear, unambiguous prompts will become a superpower. “Prompt engineering” will demand discipline in telling a machine what not to do as much as what to do. Great prompts will require the developer to truly consider how to handle edge cases rather than just the happy path. It will change the way we implement business logic.
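To make that concrete, here is a minimal sketch of what an explicitness-driven prompt might look like, written in Python purely for illustration. The `generate_code` helper is hypothetical, a stand-in for whichever LLM client you actually use; the point is the contrast between a vague request and one that spells out the contract, the edge cases, and what the model must not do.

```python
# A minimal sketch of explicitness-driven prompting. generate_code() is a
# hypothetical placeholder for a real LLM client call.

VAGUE_PROMPT = "Write a function that parses a date from a string."

EXPLICIT_PROMPT = """
Write a Python function `parse_date(value: str) -> datetime.date`.

Requirements:
- Accept ISO 8601 calendar dates only (YYYY-MM-DD); reject every other format.
- Raise ValueError with a descriptive message for empty strings,
  whitespace-only input, and impossible dates such as 2023-02-30.
- Do NOT use third-party libraries; standard library only.
- Do NOT silently return None or a default date on failure.
- Include unit tests that cover the edge cases above, not just the happy path.
"""

def generate_code(prompt: str) -> str:
    """Placeholder for a real LLM call; swap in your provider's client here."""
    raise NotImplementedError

# The explicit prompt pins down the signature, the error handling, and the
# forbidden shortcuts, so the generated code can be reviewed against a
# contract instead of a guess about what was meant.
# code = generate_code(EXPLICIT_PROMPT)
```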
AI isn’t a career shortcut—it’s a transformation in how we innovate. Developers will focus on bigger problems, design smarter systems, and create better interfaces. If you’re a seasoned developer with 20+ years of experience and this frustrates you, don’t let it. None of this would be possible without you. These layers of abstraction happen because of the brilliant work you’ve done. We all stand on the shoulders of giants, and we thank you for your contribution to solving meaningful problems. As you evolve in your career, lean into what makes a great software experience and consider how AI will write your code.
This next level of abstraction will undoubtedly be a massive shift in how new software applications are created and launched.
The AI Software Developer will be one of the hottest careers of the next decade.