Why Enterprise Architecture Deliverables Go Unused
And how to make architecture artifacts actually support decisions
Enterprise architecture (EA) functions produce a wide range of deliverables: capability maps, application landscapes, target state descriptions, roadmaps, principles, and sometimes data models, process flows, reference architectures, or various technical blueprints. Many of these are well thought through, internally consistent, and aligned with established frameworks.
And yet, a familiar situation often emerges.
The deliverables exist, but they are not actively used.
They may be referenced occasionally, but they do not consistently shape decisions, prioritization, or design choices. Over time, the organization may even forget that certain artifacts exist at all.
In earlier posts, I have discussed how architects, through their roles, can help improve the use of architecture in decision-making. In this article, the focus is slightly different: the structural characteristics of the deliverables themselves. What makes some architecture artifacts more likely to be used than others?
Challenges That Limit The Use Of Enterprise Architecture Descriptions
Below are some recurring structural reasons why architecture deliverables remain underutilized, together with practical ways to improve their usability.
No Clear Use Case Or Decision Context
A common structural challenge is that the deliverable does not have a clearly defined use case, user, or decision context.
Architecture artifacts often describe a well-structured current landscape, but it is not clear who is expected to use the material or in which situation. A target architecture may exist, but no investment decisions explicitly refer to it. A capability map may describe the landscape, but it is not used when prioritizing initiatives. Principles may be documented, but not actively applied in project or steering discussions.
Decisions are the situations where architecture becomes actionable. When it is unclear which decisions a deliverable is meant to support—and who is expected to use it—the artifact can remain conceptually sound but practically distant.
Usability often improves when the intended use context is made explicit and embedded into existing methods and decision processes. For example, a capability map can be used as a standard input in prioritization discussions, a target architecture as a reference point in investment planning, or principles as part of solution design reviews and governance checkpoints. When the user, decision situation, and practical touchpoints are clear, the deliverable is more likely to be used consistently rather than occasionally.
Too Generic To Guide Decisions
Generic guidance is often easy to agree on but difficult to apply in practice.
Principles such as “reuse before build”, “prefer standard solutions”, or “ensure scalability” are broadly sensible, yet they rarely resolve concrete trade-offs. In real situations, teams need to balance cost, speed, technical constraints, existing contracts, skill availability, and delivery timelines. General guidance alone does not indicate what should be done in a specific situation.
A similar challenge appears with overly generic architectural descriptions. Reference architectures may describe broadly accepted patterns but remain too abstract to guide actual design choices. The same applies to current and future state descriptions such as capability maps, application landscapes, or data models that operate at such a high level that the connection to real operational context remains unclear. As a result, the same material may be interpreted differently across stakeholders, or bypassed altogether when time pressure increases.
Usability often improves when the description remains simple but becomes more concrete. A principle can become more actionable when supported by examples from recent initiatives, clarification of typical trade-offs, or illustration of how the guidance applies in common design situations. Similarly, a model can remain high-level while still connecting architectural elements to recognizable business concepts such as products, customers, transactions, or operational processes.
When architecture deliverables remain understandable but relate clearly to real-world constructs, they are more likely to support consistent decisions rather than remain abstract representations.
Unclear Reliability Or Quality Issues
Architecture deliverables are less likely to be used when their reliability is uncertain or quality is poor.
Common issues include descriptions that are not up to date, unclear version status, or uncertainty about what exactly is being described. For example, it may not be obvious whether a diagram represents current state, a planned change, or a conceptual example. In some cases, the information may also be partially incorrect or incomplete.
Another typical situation is that the deliverable does not align well with other architecture views. The same concept may be described differently across diagrams, or relationships between elements may not be consistent. Inconsistent or unclear use of notation can further reduce confidence, especially when similar symbols appear to represent different meanings in different diagrams. Unnecessarily complex models can amplify these issues, making it harder to verify consistency and maintain reliability over time.
When users are unsure whether the content is accurate, consistent, or still valid, or find it difficult to understand, they may hesitate to rely on it in decision situations.
Usability often improves when the scope, status, and level of completeness are made explicit. Indicating whether a description represents current state, target state, or work in progress can reduce uncertainty. Quality can also be strengthened by keeping key views reasonably up to date, using notation consistently, ensuring that related deliverables describe the architecture in a coherent way, and avoiding unnecessary complexity that makes models harder to maintain and trust.
Architecture descriptions do not need to be perfect in order to be useful, but they need to be sufficiently reliable and understandable that decision-makers feel comfortable using them as part of their reasoning.
The Level Of Detail Does Not Match The Situation
Architecture operates at multiple levels of detail, but the usefulness of a deliverable depends on whether the level matches the situation and the needs of the user.
If the model is too high level, users may struggle to translate it into concrete design implications. If the model is too detailed, decision-makers may not see the structural relevance or may find the material unnecessarily heavy for the question at hand.
The same deliverable may be useful for one role but not for another. For example, a high-level system landscape can help a CIO understand application portfolio complexity, identify consolidation opportunities, or visualize cost distribution—especially when the visualization includes attributes such as lifecycle status, ownership, criticality, cost, or technical risk. For a solution designer working on a specific implementation decision, the same diagram may offer limited guidance.
Usability often improves when the level of detail is aligned with the decision context. Coarse-grained views can support prioritization and communication, while more detailed views can support solution design and implementation planning. At the same time, the relationships between different levels of detail need to remain reasonably consistent, so that detailed views can be understood in relation to broader structures.
Produced At The Wrong Time Or Not Available When Needed
Timing has a strong influence on usability. A common challenge is that architecture deliverables are either produced too early or not available when decisions actually need to be made.
Sometimes architecture work happens before the surrounding decision context is sufficiently clear. The deliverable may reflect reasonable assumptions at the time, but those assumptions evolve as the initiative progresses. The artifact can then feel outdated before it has had a real opportunity to be used.
In other situations, the opposite occurs: the material would be useful but is not available when needed, or it technically exists but is difficult to find. Teams then reconstruct their own view of the current state, dependencies, or constraints repeatedly, often under time pressure. Similar analyses may be recreated multiple times across initiatives because shared material is missing or difficult to locate.
Many architecture deliverables are most useful when the decision space is sufficiently defined, but still flexible enough to influence outcomes. At that point, key questions are becoming clearer, but structural choices have not yet been locked in.
Usability often improves when architecture work is more closely aligned with the planning, prioritization, and solution design cycles. Instead of producing material far in advance, some artifacts may be more effective when developed iteratively alongside initiatives, allowing assumptions to evolve together with the decision context.
At the same time, maintaining a lightweight high-level current state description is often worthwhile. In sufficiently mature environments, there is a continuous need for a shared overview of capabilities, applications, and key dependencies. Even a coarse but reasonably up-to-date baseline can significantly reduce repeated discovery work across initiatives.
No Clear Ownership
Architecture deliverables often exist in shared repositories, but responsibility for maintaining and applying them can remain unclear.
When ownership is diffuse, artifacts can gradually drift out of date. Teams may not know whether the material still reflects the current situation or whether it has been superseded.
Ownership is not only about maintaining the content itself. It also involves supporting its use in practice. If no one is responsible for connecting the deliverable to ongoing initiatives, design discussions, or planning processes, its practical relevance can weaken over time.
Architecture material tends to remain useful when someone actively keeps it connected to real change. This can include updating key views when the landscape evolves, clarifying how the material should be interpreted in specific contexts, and ensuring that relevant stakeholders are aware of its existence.
Clear ownership helps ensure that deliverables remain sufficiently current, understandable, and usable. It also creates continuity in how architectural guidance is applied across initiatives, reducing the likelihood that each team interprets the material independently.
Closing Thoughts: From Deliverables To Decision Support
EA does not necessarily need more deliverables. In many situations, a smaller number of artifacts that are clearly connected to decision contexts can create more impact than a large collection of loosely connected materials.
The goal is not to produce documents, but to support decisions.
A simple question can often help clarify direction before creating a new artifact:
Which decision does this help us make?
When the connection to decisions is clear, architecture deliverables tend to remain useful for longer. They become part of how the organization reasons about change, rather than static descriptions stored for reference.
When architecture artifacts function as shared tools for structuring discussions, clarifying trade-offs, and supporting consistent choices, they tend to stay relevant.
And when they stay relevant, they tend to be used.
📘 New Book: The Senior Expert Pay Playbook
If you work in enterprise architecture or another senior expert role, you have probably noticed that compensation does not always follow effort, competence, or even impact in a straightforward way.
I recently wrote a short book, The Senior Expert Pay Playbook, which looks at compensation from a structural perspective: how positioning, visibility of value, and proximity to important decisions influence long-term earning development.
In a way, the book applies architectural thinking to compensation. How does value become legible inside organizations? Which roles are easier to connect to economic outcomes? Why do some expert positions develop more leverage over time than others?
Many of the patterns are familiar to enterprise architects.
The book includes my own 20-year salary trajectory together with structural analysis of what influenced the progression.
The launch offer is still available:
PAYSTRUCTURE22 – The Senior Expert Pay Playbook
EXPERTMODEL22 – bundle with The Senior Expert Career Playbook
Available via Gumroad.
If you are interested in how structural positioning affects both influence and compensation, the perspective should feel quite natural.
👨💻 About the Author
Eetu Niemi is an enterprise architect, consultant, and author.
Follow him elsewhere: Homepage | LinkedIn | Substack (consulting) | Medium (writing) | Homepage (FI) | Facebook | Instagram
Books: Enterprise Architecture | The Senior Expert Career Playbook | The Senior Expert Pay Playbook | Technology Consultant Fast Track | Successful Technology Consulting | Kokonaisarkkitehtuuri (FI) | Pohjoisen tie (FI) | Little Cthulhu’s Breakfast Time
Web resources: Enterprise Architecture Info Package (FI)
📬 Want More Practical Enterprise Architecture Content?
Subscribe to Enterprise Architecture Transformation for real-world advice on architecture that supports change, strategy, and delivery.