
Large Language Models (LLMs) are transforming how we approach software architecture, but the key lies in using them wisely. Claudine Allen, iSAQB board member and lecturer at the University of the West Indies, shares a compelling vision: LLMs work best as collaborative assistants rather than autonomous problem-solvers.

The Incremental Approach

Allen’s methodology emphasizes stepwise, incremental collaboration with AI. Instead of asking ChatGPT to solve an entire architectural problem at once, she recommends breaking the architecture into manageable components. Once primary structural blocks are defined, LLMs can help detail their interfaces, logic, and documentation—but always under human guidance.

This approach proves particularly valuable during the clarification phase. Requirements gathering, constraint analysis, and quality scenario definition become faster and more precise when LLMs assist with background research and vocabulary refinement. Allen notes that while architects might articulate what's important in plain English, LLMs help translate this into the precise, measurable terms that professional architecture work requires, for example turning "the system should feel responsive" into a testable quality scenario such as "95% of requests complete within 200 ms under peak load."

Novel Domains and Knowledge Boundaries

Allen’s research into sign language translation exemplifies how LLMs excel in unfamiliar domains. Though sign language was outside her expertise, LLMs quickly identified relevant concepts — like “glossing,” the intermediate translation layer between natural language and sign language — and pointed toward foundational research papers.

However, LLMs perform best on well-documented topics. Established methodologies such as arc42 and the 4+1 views model appear consistently in training data, making them reliable subjects for architectural guidance. Emerging or highly specialized domains present a different challenge: there, LLM recommendations require careful validation.

Redefining Educational Practice

Allen’s teaching philosophy extends beyond technology. She recognized that students will enter industries where AI tools are ubiquitous, so deliberately incorporating LLMs into coursework prepares them for professional reality. Rather than banning AI, she designed assignments requiring transparency about tool usage and demanding that students understand, explain, and defend their work.

This approach mitigates cheating concerns. While LLMs might solve mathematical homework, architectural work requires contextual judgment, experimentation, and presentation skills that algorithms cannot replicate. By focusing on learning processes rather than mere answers, Allen ensures students develop critical thinking alongside practical competence.

Emerging Possibilities

Allen’s experiments with Mermaid diagram generation and AI-assisted code generation hint at architectural work’s future. By working at higher abstraction levels — exchanging structured code rather than images — architects can leverage LLM capabilities more effectively while reducing token waste and improving clarity.
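As a minimal sketch of what working at the text level might look like (the helper function and component names here are illustrative, not taken from Allen's experiments): a structural model can be rendered as Mermaid source, which an LLM can read, critique, and edit as plain text instead of an image.

```python
def to_mermaid(components, dependencies):
    """Render a component list and its dependencies as Mermaid flowchart source."""
    lines = ["flowchart TD"]
    # Declare one node per component.
    for name in components:
        lines.append(f"    {name}[{name}]")
    # Declare one directed edge per dependency pair.
    for src, dst in dependencies:
        lines.append(f"    {src} --> {dst}")
    return "\n".join(lines)


# Hypothetical architecture fragment, loosely inspired by the sign language
# translation example: a gateway calling a translation service backed by a store.
diagram = to_mermaid(
    components=["Gateway", "TranslationService", "GlossStore"],
    dependencies=[
        ("Gateway", "TranslationService"),
        ("TranslationService", "GlossStore"),
    ],
)
print(diagram)
```

Because the diagram is just structured text, a prompt can carry the full model in a few hundred characters, and the LLM's edits come back as a reviewable diff rather than a regenerated picture.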

Conclusion

Claudine Allen demonstrates that LLMs succeed in software architecture when treated as intelligent research assistants, not replacements for architectural judgment. Her incremental, transparent, and educationally focused approach turns AI from a potential threat into a powerful tool for deeper learning and faster problem-solving. The future of software architecture is not about replacing architects with AI; it is about augmenting human expertise through collaborative, responsible AI use.