With the recent release of GPT-4, many people are voicing concern about the impact of AI on the job market. A Goldman Sachs report concluded that up to two-thirds of white-collar jobs are open to partial automation through AI, and argues that this is, on balance, good news for global productivity.
Programming is forecast to be heavily impacted, and many programmers are wondering what their future will look like. Will it be a productivity utopia, or something else? While many are hopeful about the productivity benefits, I think there are hidden limitations.
One of the first use cases is code completion, where an AI-powered IDE plugin generates code faster, and perhaps more accurately, than any human. Code completion based on LLMs works by "learning" from public resources such as the web, Stack Overflow, and open source libraries in public repositories like GitHub. This public corpus of knowledge becomes the model for producing statistically generated code. AI researcher Emily Bender and her colleagues have cautioned against this model becoming a "stochastic parrot".
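To see what "statistically generated" means in miniature, here is a toy sketch (not how production LLMs work, which use neural networks over vast corpora): a bigram model trained on a tiny code-like corpus. It emits locally plausible token sequences with no understanding of your architecture, which is the stochastic-parrot concern in its simplest form.

```python
# Toy illustration only: a bigram model over code tokens. Each next token
# is sampled from the tokens that followed the current one in the corpus.
import random
from collections import defaultdict

CORPUS = "for i in range ( 10 ) : print ( i )".split()

def train_bigrams(tokens):
    """Record, for each token, which tokens followed it in the corpus."""
    model = defaultdict(list)
    for a, b in zip(tokens, tokens[1:]):
        model[a].append(b)
    return model

def generate(model, start, n=8, seed=0):
    """Sample a plausible-looking token sequence from the bigram model."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        candidates = model.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))
    return " ".join(out)
```

The output looks like code because every token pair was seen in "public" code, yet the model has no notion of what the program is for.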
Code Completion is Not Enough
Getting to compiled code is just a small part of the software engineering process. "Out of the box" AI code completion, no matter how sophisticated, lacks the context required to build scalable, reliable, and maintainable applications within your specific programming environment. Code completion from publicly trained models is missing important ingredients.
Architectural context represents the general concerns that make software maintainable and evolvable over the long term. These concerns, such as modularity (the overall shape, size, and composition of your code's building blocks), are unique to your organisation. Well-designed code modules and interfaces are key to the reusability and composability of your code, which is essential for building large-scale applications from smaller elements.
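As a concrete sketch of the kind of module interface this implies (the domain and names here are invented for illustration), consider a small internal module that exposes one well-documented function and hides the policy behind it:

```python
# Hypothetical internal module. Callers depend on one narrow interface,
# apply_discount(), rather than on the discount rules themselves -- so the
# policy can evolve in a single place without breaking callers.
from dataclasses import dataclass

@dataclass(frozen=True)
class Order:
    subtotal: float
    is_loyalty_member: bool

def apply_discount(order: Order) -> float:
    """Return the payable total after any discount.

    The discount policy is encapsulated here; callers never duplicate it.
    """
    rate = 0.10 if order.is_loyalty_member else 0.0
    return round(order.subtotal * (1 - rate), 2)
```

A generic completion model can produce a function like this, but whether it is the right size and shape of building block for your system is an architectural judgement the public corpus cannot supply.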
Organisational context represents the practices and obligations specific to your organisation. These include coding standards and patterns, as well as regulatory and security requirements. Such compliance must be built into your code from the ground up.
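For instance, an organisation might require that all outbound requests target pre-approved domains. A minimal sketch of such a rule built into code (the domain list and policy are invented for illustration) might look like:

```python
# Hypothetical org security standard: outbound calls may only target
# domains on an internal allowlist. Generated code that calls arbitrary
# URLs directly would violate this policy.
from urllib.parse import urlparse

APPROVED_DOMAINS = {"api.internal.example.com", "partners.example.com"}

def is_request_allowed(url: str) -> bool:
    """Check a URL's hostname against the organisation's allowlist."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_DOMAINS
```

A model trained only on public code has no way of knowing this rule exists, so its output must be checked against it after the fact.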
It's crucial that LLMs learn from your own documentation, codebase, and API specifications to ensure that generated code supports your architectural requirements and unique organisational context.
The immediate benefits of AI will be rapid generation of code that compiles. However, without proper care for these concerns, there's a risk of generating spaghetti code that becomes brittle and unmanageable. This is already the case in many large organisations, even with natural intelligence. Automating spaghetti code will ultimately be counter-productive.
Applying AI to Higher Order Concerns
The second-order benefits of AI will flow from an ability to design more efficiently. If we let LLMs learn from our own architectural and organisational context, AI could generate more manageable and evolvable code. For example, AI could provide more efficient discovery and guidance on the use of reusable assets, such as internal code modules and APIs.
The biggest opportunity of AI is that coders will be freed up to spend more of their valuable time on higher-order concerns, such as the strategic direction of the software or the shipping of new features, instead of just cranking out more KLOCs (thousands of lines of code).
AI is starting to change the way we write code, but it's essential that it supports our unique architectural requirements and organisational context. Code completion must support the ways that we manage and evolve our code for business productivity. AI can hinder or can help, depending on the choices we make.