Reading summaries - week twelve, Spring 2018
Class themes were Internal Capabilities - Evaluating Effectiveness in MGNPO and IT Governance in IDSC
Table of contents
MGNPO
- Chapter 13, Managing Nonprofit Organizations, Tschirhart and Bielefeld (2012)
- Ebrahim et al. (2014), What Impact? A Framework for Measuring the Scale and Scope of Social Performance
- Evaluation for the way we work, Patton (2006)
- Evaluation Flash Cards, Patton (2014)
IDSC
- A Matrixed Approach to Designing IT Governance, Weill & Ross (2005)
- Forget Strategy: Focus IT on your Operating Model, Ross (2005)
- Building IT Infrastructure for Strategic Agility, Weill, Subramani, & Broadbent (2002)
- Building Enterprise Alignment: A Case Study, Fonstad & Subramani (2009)
- Teaming Up to Crack Innovation & Enterprise Integration, Cash, Earl, & Morison (2008)
MGNPO
Chapter 13, Managing Nonprofit Organizations, Tschirhart and Bielefeld (2012)
- Nonprofits are facing increasing demands that they demonstrate their impact. Foundations and philanthropists are seeking to be more strategic as they seek social impact, government agencies are requiring information on program results, and board members want information on organizational activities.
- In its most general sense, accountability refers to an obligation or willingness to accept responsibility or to account for one’s actions
- Following from this, organizational accountability is usually defined as an organization being answerable to someone or something outside itself and accepting responsibility for activities and disclosing them.
- Accountability involves four core components:
- Transparency
- Answerability or justification
- Compliance
- Enforcement
- In practice, accountability involves three fundamental questions
- To whom is the organization accountable?
- For what is the organization accountable?
- By what means can the organization be accountable?
- nonprofits are expected to be accountable to multiple actors, including upward to funders and patrons, downward to clients, and internally to themselves and their mission
- In addition, nonprofits are accountable for finances, governance, performance, and mission
- accountability mechanisms include disclosure statements and reports, evaluation and performance assessment, self-regulation, participation, and adaptive learning
- defining the basic elements of programs
- Inputs include those to be served by the program as well as the resources needed for program activities
- Program activities are the steps an organization takes to bring about the intended program results
- Outputs are the most direct consequences of program activities
- Outcomes are the short-term and intermediate changes that occur in program participants as a result of the program activities.
- Impacts are the broader changes that program outcomes are designed to bring about
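As an aside, the five elements above can be sketched as a tiny data structure. This is my own illustration using a made-up tutoring program, not an example from the chapter:

```python
from dataclasses import dataclass

@dataclass
class LogicModel:
    """Basic elements of a program, from inputs through impacts."""
    inputs: list      # those to be served plus resources needed for activities
    activities: list  # steps the organization takes to bring about intended results
    outputs: list     # most direct consequences of the activities
    outcomes: list    # short-term and intermediate changes in participants
    impacts: list     # broader changes the outcomes are designed to bring about

# Hypothetical after-school tutoring program
tutoring = LogicModel(
    inputs=["students reading below grade level", "volunteer tutors", "curriculum", "funding"],
    activities=["recruit and train tutors", "hold twice-weekly tutoring sessions"],
    outputs=["120 students tutored", "2,400 session hours delivered"],
    outcomes=["participants' reading scores improve by one grade level"],
    impacts=["higher graduation rates in the served community"],
)

for element in ("inputs", "activities", "outputs", "outcomes", "impacts"):
    print(f"{element}: {getattr(tutoring, element)}")
```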
- program evaluation process should proceed through a series of logically related steps
- To build evaluation capacity, an organization needs to “continuously create and sustain overall organizational processes that make quality evaluation and its use routine.”
- three elements of evaluation capacity:
- Resources devoted to evaluation
- Structures conducive to evaluation
- Organizational context supportive of evaluation
- program evaluation can:
- Inform planning decisions
- Determine whether more detailed, full-blown evaluations are needed
- Track program progress
- Determine whether a program has accomplished its goals (outcome evaluation)
- Determine whether a program has been implemented as planned (process evaluation)
- Measure the effectiveness or efficiency of units or practices
- Determine whether there are unintended consequences
- Assess the degree of stakeholder satisfaction
- Compare programs or approaches to determine which might be best in a new setting
- For any evaluation, however, stakeholders may have diverse agendas and favor different performance criteria and indicators
- Those responsible for the future of the programs must be included in the evaluation process and must believe that the process will lead to positive results
- Crucial to clearly establish the specific goals and eventual use of the evaluation
- The best evaluation goals and designs can be undermined by inadequate resources or time
- Once the purpose and goals of the evaluation have been established, a number of options are available for the shape the evaluation will take
- Two general evaluation approaches have been distinguished:
- objective scientist approach
- This approach is based on the natural science model of research, a key feature is the quest for objectivity
- distance between the evaluator and the program
- outside evaluators are used
- quantitative data are gathered
- goal of the evaluation is to assess whether program goals were accomplished
- internal workings of the program are not considered
- evaluation is performed at the end of the program
- this evaluation has a summative purpose
- model it follows is described as an outcome evaluation process
- based on the idea that program goals were clear and could be objectively measured and that data on goal accomplishment was sufficient
- utilization-focused evaluation
- In this approach the goal is a more comprehensive evaluation that includes the insights of program staff and the details of program operation
- program staff have intimate knowledge of how a program actually operates
- can identify problems in cause-and-effect relationships in program logic
- fine-grained qualitative data are used in addition to objective data
- designed to produce the knowledge that will allow staff to modify a program to enhance its outcome and impact
- knowledge may be used before the program ends.
- this evaluation has a formative purpose
- the model it follows is described as a process evaluation process
- Programs are essentially tests of ideas about making something happen
- Social programs, in turn, are based on theories of change about how modification of behavior or social impact can be produced
- theories of change specify relationships between causes and effects and serve as the foundation for determining program activities and outcomes
- Social behavior is complex, and the best way to bring about changes in behavior may often be unclear or contested.
- How does this relate to program evaluation? Outcome evaluation can tell the nonprofit whether its program outcomes are as desired.
- When desired program outcomes are not obtained, however, outcome evaluation alone will not help a nonprofit to distinguish between two possible causes for the failure
- theory failure - program’s theory of change is ineffective
- program administration failure - program fails due to poor execution or budget shortfall
- Our discussion points to the need to base programs on well-articulated theories of change
- Theories of change can be developed through a series of steps:
- begins with identifying assumptions and outcomes
- backward mapping and connecting outcomes
- Backward mapping involves looking at the desired outcomes and specifying the antecedents (program steps) needed to produce them
- Finally, the mapping is displayed in a logic model
- The logic model should make clear the assumptions, interventions, and other conditions associated with producing the outcomes
- the logic model can be used to specify antecedent and mediating variables
- Logic models make the theory of change and the basis for producing impacts clear and provide a sound basis for program evaluation
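To make backward mapping concrete, here is a minimal sketch (my own, with invented antecedents) that walks from a desired outcome back to program activities and prints the chain earliest step first:

```python
# Hypothetical antecedent map: each result and the prior steps assumed to produce it.
ANTECEDENTS = {
    "higher graduation rates": ["reading scores improve"],
    "reading scores improve": ["students attend tutoring regularly"],
    "students attend tutoring regularly": ["tutoring sessions held", "families engaged"],
    "tutoring sessions held": ["tutors recruited and trained"],
    "families engaged": [],               # program activities have no further antecedents
    "tutors recruited and trained": [],
}

def backward_map(outcome, antecedents):
    """Return the chain of steps needed to produce `outcome`, earliest first."""
    ordered, seen = [], set()

    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for prior in antecedents.get(node, []):
            visit(prior)  # antecedents come before the results they produce
        ordered.append(node)

    visit(outcome)
    return ordered

for step in backward_map("higher graduation rates", ANTECEDENTS):
    print("->", step)
```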
- Specifying measurable goals is a crucial activity in program evaluation—it links the logic model and the evaluation process
- Outcome and impact goals serve as the basis for outcome evaluation
- Activity goals serve as the basis for internal program activities and process evaluation
- Bridging goals are defined as falling between activity and outcome goals
- Bridging goals are the links between program activities and outcomes and hence are at the heart of the theory of change the program is seeking to embody
- To demonstrate that the program’s theory of change is appropriate, it is necessary to show that the bridging goals have been accomplished and have led to the outcome
- Program evaluation data collection and analysis is based on the various research methodologies that have been developed in the social sciences, including psychology, sociology, and economics
- as one moves from inputs to impacts, one tends to go further out in time, away from the center of the organization, down in degree of control, down in measurability, up in abstraction, and down in the degree to which one can confidently attribute causation
- The question becomes, how far “outside” the organization should data routinely be collected on the results of the organization’s activities (that is, what is the normal data horizon)?
- The data used in evaluations may come from a variety of sources. Program administration requires the keeping of records on program inputs, activities, and outputs. Statistics based on this information are also routinely computed.
- Data on outcomes may be periodically or sporadically gathered as part of other organizational activities, such as strategic planning or marketing
- Information may be obtained directly from service recipients. In some cases, client records may be available from other sources.
- typically, client perceptions about the services they received and their satisfaction with these services are assessed through surveys.
- several qualitative data collection techniques are available, primarily observation and in-depth interviewing. These techniques are usually carried out by trained experts.
- Outcome evaluation requires data on the degree to which program outcome goals were realized
- A variety of data collection designs are available. They all compare program outcomes with what would have happened without the program
- Organizations with limited resources or expertise may not be able to carry out the more sophisticated designs
- To show that program activity caused behavioral change, it is necessary to show:
- that the behavioral change varied in tandem with the program activity (covariation)
- that the program occurred before the behavior changed (proper time order)
- that the behavioral change was not caused by any other factors (nonspuriousness)
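A toy numeric illustration of the first two conditions (covariation and proper time order), using a comparison group to stand in for "what would have happened without the program." The scores are invented, and nonspuriousness still depends on how comparable the two groups really are:

```python
# Hypothetical pre/post scores for program participants and a comparison group
program_pre, program_post = [52, 47, 60, 55], [61, 58, 70, 63]
comparison_pre, comparison_post = [50, 49, 58, 56], [52, 50, 59, 57]

def mean(xs):
    return sum(xs) / len(xs)

# Time order: the program happens between the pre and post measurements.
program_change = mean(program_post) - mean(program_pre)
comparison_change = mean(comparison_post) - mean(comparison_pre)

# Covariation: the change is larger where the program was present.
estimated_effect = program_change - comparison_change

print(f"program change:    {program_change:.1f}")
print(f"comparison change: {comparison_change:.1f}")
print(f"estimated effect:  {estimated_effect:.1f}")
```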
- designs for the collection of data for process evaluations
- Case studies
- Focus groups
- Ethnography
- Data analysis is a highly specialized field, and if the analysis is to go beyond simple descriptions and breakouts, specially trained staff or outside expertise is likely to be needed.
- For any stakeholder group, it is important to provide neither too little nor too much information. Too little information will leave questions unanswered and frustrate decision making. Too much or overly complex information will confuse and possibly mislead those getting it
- Many community-based nonprofits lack the will, expertise, and resources for the kind of large-scale, substantially funded evaluations conducted by experts that occur in the government context.
- In the nonprofit context, evaluations demanded by stakeholders can be seen as intrusive diversions from the real work of the organization.
- In spite of the challenges, most nonprofits are concerned about evaluating their programs.
- Evaluation results were used for strategic management as well as for external reporting or program promotion.
- program evaluation provides practical benefits to nonprofits. Evaluation can help to provide feedback and direction to staff, focus boards on program issues, identify service units or participant groups that need attention, compare alternative service delivery strategies, identify partners for collaborations, allocate resources, recruit volunteers and attract customers, set targets for future performance, track program effectiveness over time, increase funding, and enhance a public image
- Evaluations can become embroiled in political tensions
- A nonprofit’s stakeholders, including the board, staff, volunteers, consumers, funders, community leaders, and regulators, may have different, and possibly competing, views on the desirability, goals, and techniques of program evaluation.
- Nonprofit leaders must balance these multiple viewpoints when making decisions about program evaluation
- to make program evaluation the basis of organizational improvement and a cornerstone of organizational learning, leaders must motivate and mobilize internal stakeholders
- Leaders should communicate that evaluation is an effective tool for helping the organization accomplish its mission and should provide resources for evaluation capacity building
- demands from external stakeholders must also be addressed
- key internal challenges to successfully organizing evaluations in nonprofits:
- Involve leaders of the organization
- Establish a high level of trust between funders and the nonprofit
- Clarify roles, responsibilities, and expectations
- Allocate sufficient time for technical assistance that might be needed
- Involve a broad range of staff
- Ensure that staff can devote sufficient time
- Funders have played a key role in the spread of nonprofit outcome evaluation
- nonprofits can work with funders to establish a more collaborative evaluation environment
- funders should be encouraged to view their role as helping agencies to develop evaluations that will provide the most useful information
- Funders serve their own best interests by helping agencies to develop evaluation capacity
- Local funders can collaborate with each other to support agency evaluation efforts
- When funders make outcome data a reporting requirement, they should drop other requirements that do not relate to this focus
Ebrahim et al. (2014), What Impact? A Framework for Measuring the Scale and Scope of Social Performance
- Organizations with social missions, such as nonprofits and social enterprises, are under growing pressure to demonstrate their impacts
- Not all organizations should measure their long-term impact
- Some organizations would be better off measuring shorter-term outputs or individual outcomes.
- Funders such as foundations and impact investors are better positioned to measure systemic impact
- nonprofit organizations, philanthropy, and social enterprise have been preoccupied with two powerful mantras in recent years
- Since the early 1990s, the refrain of “accountability”
- A more recent manifestation of this discourse has centered on the mantra of “impact”
- attention to impact, following on the heels of accountability, is mainly driven by funders who want to know whether their funds are making a difference
- also driven by an increasing professionalization of the sector
- it is not feasible, or even desirable, for all organizations to develop metrics at all levels of a results chain
- more important challenge is one of alignment: designing metrics and measurement systems to support the achievement of well-defined mission objectives
- Often, it is the funder, who sits at a higher level in the social sector ecosystem, that will have a broader and more integrative perspective on how the work of several implementing organizations fits together to advance systemic goals
- Much of the literature on the topic of performance in the social sector is under-theorized and in need of conceptual framing
- The most widely advocated set of approaches to social performance measurement involve an assessment of impacts or results, which are broadly labeled as “impact evaluation” and “outcome measurement.”
- The term “impact” has become part of the everyday lexicon of social sector funders in recent years, with frequent references to “high-impact nonprofits” or “impact philanthropy” and “impact on steroids”
- we distinguish between outcomes and impacts, with the former referring to lasting changes in the lives of individuals and the latter to lasting results achieved at a community or societal level
- Many frameworks for measuring social performance employ a “results chain” or “logic model”
- Funding organizations—from philanthropic foundations and governmental agencies to impact investors—increasingly expect the organizations they support to measure their outcomes and impacts.
- evidence on whether outcome measurement has led to improved performance is mixed
- study of thirty leading U.S. nonprofits found that measurement was useful to the organizations for improving outcomes, particularly when they:
- set measurable goals linked to mission
- kept measures simple and easy to communicate
- selected measures that created a culture of accountability and common purpose
- results were not all positive: a significant number of agencies reported that implementing outcome measurement has led to a focus on measurable outcomes at the expense of other important results (46%), has overloaded the organization’s record-keeping capacity (55%), and that there has remained uncertainty regarding how to make program changes based on identified strengths and weaknesses (42%)
- Many foundations continue to struggle with how to integrate a range of measurement approaches into their decision making
- In recent years, however, there has been considerable progress in developing measurement and evaluation methods with numerous approaches being developed by prominent consulting firms
- a chorus of skeptical voices, particularly from practitioners, has suggested that while impact and outcome measurement appears to be “a good tool to help funders see what bang they’re getting for their buck,” it runs the risk of being counterproductive in the long run, both by drawing precious resources away from services and by putting too much emphasis on outcomes for which the causal links are unclear
- Conventional wisdom in the social sector suggests that one should measure results as far down the logic chain as possible, to outcomes and societal impacts.
- expectation is based on a normative view that organizations working on social problems, especially if they seek public support, should be able to demonstrate results
- it is worth considering whether, and to what degree, such measurement makes sense for all social sector organizations.
- every organization should at least measure and report on its activities and outputs, as these results are largely within its control.
- measuring outcomes is possible under two conditions that are uncommon in the social sector: when the causal link between outputs and outcomes is well established, or when the range of integrated interventions needed to achieve outcomes is within the control of the organization.
- while a nonprofit or social enterprise may have an aspirational mission for what the world should look like, in practice its work is best captured by its more pragmatic operational mission.
- The operational mission, and how to measure progress towards achieving it, can be further understood by examining the scale and scope of the organization’s work.
- The scale of an organization’s operations can be expected to evolve with time.
- As it gains reputation and funding, the organization will be drawn to expand the reach of its operations.
- The second dimension, scope, is a measure of the range of activities required to address the need identified in the operational mission
- the notion of scope captures the set of activities necessary for addressing a social problem, while scale captures the target size of the problem.
- The problem itself is articulated in the organization’s operational mission
- Clarity on all three components—operational mission, scale, and scope—is necessary in order to know what to measure.
- Our performance framework suggests that social sector organizations should primarily focus on delivering against their operational mission
- all organizations should be capable of measuring the outputs of their operations
- only some will be able to go further to make credible and measurable claims about outcomes
- possible under two conditions:
- the organization implements a narrow scope of activities where the causal link between outputs and outcomes is clearly established through evidence
- the organization implements a broad scope of activities that is vertically integrated to increase control over outcomes
- time horizons for achieving outcomes can vary considerably
- Only rarely will organizations be in a position to go even further by claiming long-term sustained “impacts” on their communities and society.
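Read as a decision rule, the framework above might be paraphrased like this. This is my sketch, not the authors' wording; the funder case reflects their point that societal impact measurement sits mainly with funders and aggregators:

```python
def recommended_measurement_level(narrow_scope_with_evidence: bool,
                                  broad_vertically_integrated_scope: bool,
                                  is_ecosystem_funder: bool = False) -> str:
    """Rough paraphrase of Ebrahim et al.'s guidance on what to measure."""
    if is_ecosystem_funder:
        # Funders overseeing many operating organizations are positioned to measure societal impact
        return "outputs + outcomes + societal impacts"
    if narrow_scope_with_evidence or broad_vertically_integrated_scope:
        # Outcomes are credibly claimable only under one of the two scope conditions above
        return "outputs + outcomes"
    # Every organization should at least measure its activities and outputs
    return "outputs"

print(recommended_measurement_level(True, False))                           # outputs + outcomes
print(recommended_measurement_level(False, False))                          # outputs
print(recommended_measurement_level(False, False, is_ecosystem_funder=True))
```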
- Scale in the social sector can be achieved not only through organizational growth, but also via a myriad of other means, particularly through influencing public policy and coalition building, training others to replicate and adapt its model, or even through the creation of new industries
- Expanding scope in the social sector, in order to increase control over outcomes, is not limited to vertical integration
- An alternate strategy is to partner with organizations that carry out complementary work along the results chain
- both options for improving performance—expanding scale or scope—can be achieved by collaborating with other organizations rather than by attempting to grow the organization
- Performance measurement does not operate in a vacuum and is the subject of much tension between operating organizations and their funders
- How do funders assess their own performance?
- The most critical challenges of performance measurement lie not at the level of operating organizations, but among aggregators such as foundations, governments, and impact investors. It is at this level—where the funder is able to oversee hundreds of operating organizations—that it is possible to measure societal impacts.
- While funders such as foundations, impact investors, and governmental aid agencies seek to assess the performance of their grantees or investees, it is less common for them to apply the same standards to measuring their own performance.
- It is unlikely that there is a single best way for funders to assess their own performance, or the collective performance of their grantees or investments.
- The core of our framework for measuring social performance is relatively simple:
- clarify the operational mission
- specify the set of activities to address that mission (scope)
- identify the target size of the problem (scale)
- such measurement is rare in practice
- Despite the conceptual simplicity of our model, we recognize that carrying it out poses significant challenges
- There is an urgent need for better knowledge on the challenges of scale and scope in the social sector.
- Social sector leaders and their funders are increasingly embracing performance measurement as critical to helping them achieve their missions at scale. They are shifting from a focus on evaluating impact after implementing their interventions, to also using measurement during program design and implementation in order to get real-time feedback for improving their work
- In terms of performance measurement methodologies, there has been a surge in the development of more participatory and integrative tools such as constituency feedback, most significant changes techniques, and developmental evaluation, as well as network-based approaches such as collective impact and outcome mapping
- suited to settings of high complexity involving interactions across multiple organizations and sectors
- In other words, the social sector is in a period of vibrant innovation on performance measurement.
- There is a unique opportunity for funders to integrate multiple levels of analysis —programmatic, organizational, and societal—in assessing and improving performance in the social sector
- can be done in at least two ways
- Funders should allocate greater resources to building the management capacity of nonprofits and social enterprises, and view themselves as part of a syndicate to enable mission success
- Funders should turn their attention to their own performance and impact, while easing off on their demands for operating organizations to prove their impacts
- Impacts on systemic societal problems are unlikely to be achieved by organizations acting alone; it thus makes more sense for funders rather than operating organizations to take on the challenge of measuring those impacts.
- nonprofits and social enterprises that operate in a niche should measure their outputs, and sometimes their outcomes, within that niche
- Funders that operate higher up in the ecosystem should measure impacts at a societal level
- Bridging these multiple gaps—between performance at the level of programs, organizations, and society—will require funders to think more strategically about the different organizations and programs they fund so that their collective investment can achieve systemic impacts with each individual piece playing its role
Evaluation for the way we work, Patton (2006)
- Helping people learn to think evaluatively can make a more enduring impact
- because evaluation typically carries connotations of narrowly measuring predetermined outcomes achieved through a linear cause-effect intervention, we want to operationalize evaluative thinking in support of social innovation through an approach we call developmental evaluation.
- designed to be congruent with and nurture developmental, emergent, innovative, and transformative processes.
- Helping people learn to think evaluatively can make a more enduring impact from an evaluation than use of specific findings generated
- learning to think and act evaluatively can have an ongoing impact.
- The right purpose and goal of evaluation should be to get social innovators who are, often by definition, ahead of the evidence and in front of the science, to use tools like developmental evaluation to have ongoing impact and disseminate what they are learning
- Developmental evaluation refers to long-term, partnering relationships between evaluators and those engaged in innovative initiatives and development.
- processes include asking evaluative questions and gathering information to provide feedback and support; the evaluator is part of a team whose members collaborate to conceptualize, design, and test new approaches in a long-term, ongoing process of continuous improvement, adaptation, and intentional change
- Adding a complexity perspective helps being mindful about and monitoring what is emerging
- Complexity-based, developmental evaluation is decidedly not blame-oriented
- As a complexity-based, developmental evaluation unfolds, social innovators observe where they are at a moment in time and make adjustments based on dialogue about what’s possible and what’s desirable
- Summative judgment about a stable and fixed program intervention is traditionally the ultimate purpose of evaluation.
- None of these traditional criteria are appropriate or even meaningful for highly volatile environments, systems-change-oriented interventions, and emergent social innovation
- formative evaluation carries a bias about making something better rather than just making it different.
- Change is not necessarily progress. Change is adaptation.
- Complexity-based developmental evaluation shifts the locus and focus of accountability
- Traditionally accountability has focused on and been directed to external authorities and funders. But for value-driven social innovators the highest form of accountability is internal
- It takes courage to face the possibility that one is deluding oneself
Evaluation Flash Cards, Patton (2014)
- Evaluative Thinking
- Evaluation Questions
- Logic Models
- Theory of Change
- Evaluation vs. Research
- Dosage
- Disaggregation
- Changing Denominators, Changing Rates
- SMART Goals
- Distinguishing Outcomes From Indicators
- Performance Targets
- Qualitative Evaluation
- Triangulation Through Mixed Methods
- Important and Rigorous Claims of Effectiveness
- Accountability Evaluation
- Formative Evaluation
- Summative Evaluation
- Developmental Evaluation
- The IT Question
- Fidelity or Adaptation
- High-Quality Lessons Learned
- Evaluation Quality Standards
- Complete Evaluation Reporting
- Utilization-Focused Evaluation
- Distinguish Different Kinds of Evidence
IDSC
A Matrixed Approach to Designing IT Governance, Weill & Ross (2005)
- Without formal IT governance, individual managers are left to resolve isolated issues as they arise, and those individual actions can often be at odds with each other
- IT governance is a mystery to key decision makers at most companies
- Just one in three senior managers know how IT is governed
- When senior managers take the time to design, implement, and communicate IT governance processes, companies get more value from IT.
- Effective IT governance doesn’t happen by accident
- senior management awareness of IT governance is the single best indicator of its effectiveness
- IT governance can be assessed by evaluating how well it enables IT to deliver on four objectives: cost-effectiveness, asset utilization, business growth and business flexibility
- high IT governance performance correlated with the achievement of other desired measures of success
- IT governance encompasses five major decision domains.
- IT principles comprise the high level decisions about the strategic role of IT
- IT architecture includes an integrated set of technical choices to guide the organization
- IT infrastructure consists of the centrally coordinated, shared IT services that provide the foundation for the enterprise’s IT capability
- Business application needs are the business requirements for IT applications
- prioritization and investment decisions determine how much and where to invest in IT
- Each of these decision areas can be addressed at the corporate, business unit or functional level or some combination of the three
- first step in designing IT governance is to determine who should make and be held accountable for each decision area
- six archetypal approaches to IT decision making, ranging from highly centralized to highly decentralized:
- business monarchy - most centralized approach, senior business executive or a group of senior execs make all the IT-related decisions
- IT monarchy - decisions are made by an individual IT executive or a group of IT executives
- federal system - representatives of all the operating groups collaborate with the IT department
- IT duopoly - involves IT executives and a group of business leaders
- feudal system - business unit or process leaders make separate decisions
- anarchy - each individual user or small group pursues their own IT agenda
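Crossing the five decision domains with the six archetypes is essentially a matrix; a governance design can be written down by recording, for each domain, which archetype provides input and which decides. The sketch below is one hypothetical filled-in arrangement (the input/decision assignments are invented for illustration, not Weill & Ross's findings):

```python
ARCHETYPES = {"business monarchy", "IT monarchy", "federal", "IT duopoly", "feudal", "anarchy"}

# Hypothetical governance arrangement: which archetype provides input and which decides, per domain
governance = {
    "IT principles":                {"input": "federal",    "decision": "IT duopoly"},
    "IT architecture":              {"input": "IT duopoly", "decision": "IT monarchy"},
    "IT infrastructure":            {"input": "federal",    "decision": "IT monarchy"},
    "Business application needs":   {"input": "feudal",     "decision": "federal"},
    "IT investment/prioritization": {"input": "federal",    "decision": "business monarchy"},
}

for domain, cell in governance.items():
    assert cell["input"] in ARCHETYPES and cell["decision"] in ARCHETYPES
    print(f"{domain:30} input: {cell['input']:12} decision: {cell['decision']}")
```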
- Once the types of decisions and the archetypes for making those decisions are mapped out, a company must design and implement a coordinated set of governance mechanisms
- three kinds of governance mechanisms:
- Decision-making structures
- most visible IT governance mechanisms
- Different archetypes rely on different decision-making structures
- Alignment processes
- management techniques for securing widespread and effective involvement in governance decisions and their implementation
- Formal communications
- variety of ways: general announcements, formal committees, one-on-one sessions, intranets, etc.
- more communication generally means more effective governance
- Well-designed, well-understood and transparent mechanisms promote desirable IT behaviors and individual accountability
- There is no single best model of IT governance.
- effective IT governance should be evident in business-performance metrics
- Centralized Approaches and Profitability
- The most profitable companies tend to be centralized in their approach to IT governance. Their strategies emphasize efficient operations
- desirable for IT governance to encourage a high degree of standardization in the pursuit of low business costs
- Key mechanisms include executive committees for decision making, centralized processes for architecture compliance and exceptions, enterprise-wide IT investment decision processes, and formal post-implementation assessments of IT-related projects
- Decentralized Approaches and Growth
- The fastest-growing companies are focused on innovation and time to market
- companies seek to maximize responsiveness to local customer needs and minimize constraints
- Accordingly, they require few governance mechanisms, often relying only on an investment process that identifies high-priority strategic projects and manages risk.
- Hybrid Approaches and Asset Utilization
- Companies seeking optimal asset utilization attempt to balance the contrasts between governance for profitability and governance for revenue growth and innovation
- focus on using shared services to achieve either responsiveness to customers or economies of scale - or both
- IT principles emphasize sharing and reuse of processes, systems, technologies and data
- hybrid approach to governance, mixing elements of centralized and decentralized governance
- typically rely on duopolies and federal governance design
- The hybrid approach is common, but it clearly demands a great deal of management attention
- Effective IT governance demands that senior managers define enterprise performance objectives and actively design governance to facilitate behavior that is consistent with those objectives.
- companies have mature business governance processes to use as a starting point in designing IT governance
- In order to use the framework effectively, management teams must first establish the context for IT governance
- means clarifying how the company will operate, how the company’s structure will support its operations and what governance arrangements will elicit the desirable behaviors
- Governance arrangements generally transcend organizational structure and can be more stable than structure.
- IT governance design should encompass four steps:
- Identify the company’s needs for synergy and autonomy
- Synergy-autonomy trade-offs force senior managers to make tough decisions and communicate those decisions throughout the enterprise
- establishes the parameters for the design of IT governance
- Establish the role of organization structure
- By establishing organizational priorities for autonomy and synergy, companies can introduce organizational designs and incentive systems that reinforce their priorities
- Identify the desirable IT-related behaviors that fall outside the scope of organizational structure
- rather than restructuring each time priorities shift, new governance mechanisms can force new behaviors without requiring reorganization
- governance mechanisms can provide organizational stability by demanding disciplined processes
- governance itself appears to become more stable as companies learn good governance practices
- Together, organizational structure and IT governance design can allow companies to achieve seemingly conflicting objectives
- IT investment decision processes can direct business unit priorities toward enterprise priorities by approving only projects that support enterprise strategies, even if organizational structures place responsibility for accomplishing project outcomes on business unit managers
- Thoughtfully design IT governance on one page
- When the objectives of IT governance are clear, companies can design IT governance by outlining mechanisms
- Companies that have not been effective in using IT strategically should expect to invest in organizational learning
- Effective IT governance certainly doesn’t happen accidentally. But companies that have followed the steps enumerated above have had demonstrable success designing, communicating and refining IT governance that creates real business value in their enterprises
Forget Strategy: Focus IT on your Operating Model, Ross (2005)
- Most companies try to maximize value from IT investments by aligning IT and IT-enabled business processes with business strategy. But business strategy is multi-faceted
- strategic priorities can shift
- As a result, strategy rarely offers sufficiently clear direction for development of stable IT and business process capabilities
- IT is left to align with strategic initiatives after they’re launched and becomes a persistent bottleneck
- To make IT a proactive—rather than reactive—force in creating business value, companies should define an operating model
- an operating model is the necessary level of business process integration and standardization for delivering goods and services to customers
- By identifying integration and standardization requirements an operating model defines critical IT and business process capabilities
- two important choices in the design of their operations:
- how standardized their business processes should be across operational units
- how integrated their business processes should be across those units
- four operating models:
- Diversification (low standardization, low integration)
- pursue different markets with different products and services, and benefit from local autonomy in deciding how to address customer demands
- Unification (high standardization, high integration)
- pursues the need for reliability, predictability and low cost by standardizing business processes and sharing data across business units to create an end-to-end view of operations and a single face to the customer
- Coordination (low standardization, high integration)
- creates a single face to its customers or a transparent supply chain without forcing specific process standards on its operating units
- Replication (high standardization, low integration)
- perform tasks the same way using the same systems, although operating units rarely interact
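Since the four models are just the quadrants of a two-by-two on standardization and integration, they reduce to a tiny lookup (illustrative only):

```python
def operating_model(standardization: str, integration: str) -> str:
    """Map the two business-process design choices to one of the four operating models."""
    quadrants = {
        ("low", "low"):   "Diversification",
        ("high", "high"): "Unification",
        ("low", "high"):  "Coordination",
        ("high", "low"):  "Replication",
    }
    return quadrants[(standardization, integration)]

print(operating_model("high", "high"))  # Unification
print(operating_model("low", "high"))   # Coordination
```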
- By identifying the intended level of business process integration and standardization, the operating model determines priorities for development of digital capabilities and thus IT investment
- Although most companies can identify processes fitting every operating model, they need to select a single operating model to guide management thinking and system implementation
- One way companies respond to conflicting demands is to adopt different operating models at different organizational levels
- strong preference across companies and industries for the Unification model
- Data collected at 103 companies in 2004
- 63% targeting Unification
- 9% targeting Diversification
- 17% targeting Coordination
- 11% targeting Replication
- appeal of the Unification model is that it provides a thick foundation of digital capabilities to leverage in future business initiatives
- requires a great deal of time, money and management focus
- (Coordination and Replication) require less time for building capabilities before companies can start re-using them
- each operating model creates opportunities—but also creates limitations.
- The operating model concept requires that management put a stake in the ground and declare which business processes will distinguish a company from its competitors
- not choosing an operating model is just as risky
- In adopting an operating model a company benefits from a paradox: standardization leads to flexibility
- an operating model provides needed direction for building a reusable foundation for business execution. IT becomes an asset instead of a bottleneck
Building IT Infrastructure for Strategic Agility, Weill, Subramani, & Broadbent (2002)
- few choices more critical than deciding which IT investments will be needed for future strategic agility
- those choices can significantly enable or impede business initiatives
- investments by different business units are often made independently, often of a short-term, catch-up, or bleeding-edge nature, resulting in incompatible technologies
- Overinvesting in infrastructure leads to wasted resources; underinvesting translates into delays, rushed implementations, islands of automation, and limited sharing
- infrastructure investments are often shared across many applications, business initiatives and business units. But sharing requires negotiation
- Executives need a framework for making informed decisions about IT infrastructure
- The key finding: In leading enterprises, each type of strategic agility requires distinct patterns of IT-infrastructure capability. And any company that can determine the type of agility it will need for specific business initiatives is more likely to make sensible infrastructure investments
- Once a company’s infrastructure is in place, there is a potential payoff: Competitors need long lead times to emulate
- tailored, strategy-enabling infrastructure can be reused for many business initiatives
- Whether to place the IT infrastructure capability in individual business units or make it enterprise-wide is a strategic decision
- An integrated IT infrastructure combines the enterprise’s shared IT capabilities into a platform for all business processes
- leading companies we studied tended not to establish their infrastructure through a few large one-time IT investments, but gradually, through incremental modular investments
- IT infrastructure is a collection of reliable, centrally coordinated services budgeted by senior managers and comprising both technical and human capability
- The services concept has advantages for the IT group, too, because infrastructure services remain relatively stable even when technical components change
- In analyzing the infrastructure services of the 89 enterprises in our study, we identified 70 different services in 10 clusters of IT-infrastructure services
- The first six clusters comprise the physical layer:
- Cluster 1: channel-management services
- Cluster 2: security and risk-management services
- Cluster 3: communication service
- Cluster 4: data-management services
- Cluster 5: application-infrastructure services
- Cluster 6: IT-facilities-management service
- four clusters that represent management-oriented IT capabilities:
- Cluster 7: IT-management services
- Cluster 8: IT-architecture-and-standards service
- Cluster 9: IT-education services
- Cluster 10: IT R&D services.
- Strategic agility is defined by the set of business initiatives an enterprise can readily implement
- research demonstrates a significant correlation between strategic agility and IT-infrastructure capability
- if managers can describe their desired strategic agility, they then can identify the IT-infrastructure service clusters that need to be above the industry average - and thus can create a distinctive competence
- industry leadership in implementing IT initiatives requires high-capability IT infrastructure in all three realms of the value net, with high levels of competence essential in every cluster but IT education
- integrated infrastructure needed for strategic agility does not have to be enterprise-wide
- Notably, there is a conflict inherent in data management. For internal and supply-side initiatives, data management is best provided locally, but for demand-side initiatives, data management is needed enterprise-wide
- B2B and B2C initiatives require different patterns of high-capability infrastructure both in terms of which clusters are key and whether they are enterprise-wide or local.
- B2B interactions, all high-capability-infrastructure clusters tend to be managed at the business-unit level.
- B2C, such capabilities are centrally coordinated, with the emphasis on uniformity across business units
- differences in infrastructure capabilities depending on whether a company was pursuing initiatives in new products or new markets
- new-product initiatives, R&D and channel-management clusters were mostly local
- new-market initiatives required enterprise-wide service clusters
- implementing different types of electronically based business initiatives requires different high-capability IT infrastructures
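The patterns above can be jotted down as a simple lookup from initiative type to where the key high-capability clusters tend to sit; this only restates the bullets above and simplifies the paper's full findings:

```python
# Simplified restatement of the infrastructure patterns summarized above
INFRASTRUCTURE_PATTERNS = {
    "B2B initiatives":         "high-capability clusters managed mostly at the business-unit level",
    "B2C initiatives":         "capabilities centrally coordinated, uniform across business units",
    "new-product initiatives": "R&D and channel-management clusters mostly local",
    "new-market initiatives":  "key service clusters provided enterprise-wide",
}

for initiative, pattern in INFRASTRUCTURE_PATTERNS.items():
    print(f"{initiative:24} -> {pattern}")
```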
- Strategic agility requires time, money, leadership and focus — and an understanding of which distinct patterns of high-capability infrastructures are needed where
- Underinvesting reduces strategic agility and slows time to market
- infrastructure investments usually must be made before investments in business applications because doing both at the same time results in infrastructure fragmentation
- if the infrastructure is not used or is the wrong kind, a company is overinvesting and wasting resources
- Successful enterprises get the infrastructure balance right because they make regular, systematic, modular and targeted investments in IT infrastructure on the basis of an overall strategic direction.
- critical for the enterprise’s most senior executives to understand which specific IT-infrastructure capabilities are needed for which kinds of initiatives
Building Enterprise Alignment: A Case Study, Fonstad & Subramani (2009)
Executive Summary
- IT-business alignment in multi-business-unit firms that have a federated IT structure involves two types of alignment—local alignment and enterprise alignment
- Local alignment efforts focus on serving the technology needs of an individual business unit and creating business value
- Enterprise alignment efforts involve coordinating potential economies and efficiencies across business units
- three components to be key to successful enterprise alignment:
- Building the capabilities of the shared IT services group so it can provide infrastructure services
- Introducing opportunities for IT and business managers to collaborate
- Creating new mechanisms for business unit leaders to be better informed about IT investment trade-offs and corporate IT leaders to be better informed about the business value of specific shared services
Federated IT requires both local and enterprise alignment
- We define alignment as the process by which those responsible for managing information technology (IT) and stakeholders from the rest of a firm work together to achieve long-term business value
- multi-business-unit firms that have a federated IT structure require two forms of alignment: local and enterprise
- Although local and enterprise alignment share common elements, they differ significantly in the overall objectives of the working relationship, the metrics for success, the participants who need to be involved, the key interdependencies that participants manage, and the tools they draw on
- Firms that focus solely on local alignment to enable business responsiveness risk creating “IT infrastructure spaghetti”
- uncoordinated islands jeopardize both business-unit and enterprise-wide interests.
- To build enterprise alignment, IT and business managers must learn to work together
- lack of understanding of IT infrastructure investment causes problems in firms where IT is integral to business operations
- IT and business managers had worked together to build enterprise alignment by developing three components:
- Both the infrastructure group and the application development group strengthened their internal IT capabilities
- IT leaders created short-term and long-term engagement opportunities for IT and business stakeholder groups to collaborate
- IT and business participants created decision making tools for managing interdependencies
- Lessons learned
- Lesson 1: In firms with a federated IT structure, local alignment is insufficient; enterprise alignment is needed as well.
- Lesson 2: Manage the interdependencies between applications and IT infrastructure.
- Recommendations
- Strengthening Internal IT Capabilities
- Include operations and maintenance costs in project proposals
- Build project management capabilities
- Create a relationship management group
- Define services by their business role
- Enhancing Engagement Opportunities
- Introduce a short-term project with a clear objective to rally collaboration between key stakeholders.
- Assign clear roles and responsibilities for achieving both local and enterprise-wide objectives.
- Hold key decision makers accountable over the long-term by having them participate in regular meetings where the decision process and trade-offs are transparent.
- Improving Coordination of Interdependencies
- Produce tools that relate costs and benefits managed within business units to costs and benefits managed enterprise-wide.
- Create options that enable business-unit decision makers to share responsibility for managing trade-offs.
- Enterprise alignment enables IT and business stakeholder groups to take greater control of shared resources and achieve synergies that no single IT unit can achieve on its own.
Teaming Up to Crack Innovation & Enterprise Integration, Cash, Earl, & Morison (2008)
- Successful innovation often depends on the ability to coordinate efforts across organizational boundaries because innovations reach sufficient scale and impact only when integrated into the larger operations
- CEOs today are asking their CIOs and IT organizations to play bigger roles in the growth agenda by providing the tools for collaborative innovation
- the work involves sometimes daunting challenges because business innovation and integration have something else in common – both are still “unnatural acts” in most large corporations. Businesses are better at stifling innovation than at capitalizing on it. The larger and more complex the organization, the stronger the status quo can be
- Specifically, we recommend the formation of two agencies:
- A distributed innovation group (DIG), which doesn’t “do” innovation but rather fosters and channels it.
- An enterprise integration group (EIG), dedicated to the horizontal integration of the corporation
- Sometimes these groups report to the CIO; sometimes they do not. Either way, they are home to some of the corporation’s most capable and experienced IT professionals
- The common recipe for increasing innovation predominantly focuses on generating and vetting new ideas. But that’s not the problem: Large corporations generate plenty of ideas. The problem is harvesting them, allocating the company’s vast resources to them, and managing their development in a coordinated and efficient way
- The DIG deploys entrepreneurial analysts to promote innovation in a variety of ways:
- Scouting for ideas with potential for the company
- Constantly scanning the external environment
- Facilitating participation in “ideagoras” – the online market places for problem solving
- Acting as a center of innovation expertise that advises business units on managing portfolios of innovation initiatives
- Publicizing promising innovations and their progress toward implementation
- Serving as a temporary home for developing pilots or prototypes of promising innovations
- The DIG should be staffed with business-IT hybrids – people with sufficient depth in specific business processes to sharply focus innovation in those areas
- members should be widely dispersed across the enterprise
- full-time agents of innovation, with no other day job.
- they must be networked together
- must be adaptable, keeping its most skilled analysts deployed where the innovation action is
- To whom should the DIG report? If innovation is high enough on the corporate agenda, it should answer to the CEO. Otherwise, it might report to the R&D executive or perhaps to a business-unit or process executive
- We see many variations on the DIG theme
- Three types of IT capabilities are especially important to distributed innovation:
- Up-to-date understanding of emerging technologies and insight into trends, especially how technologies are converging to create radically new possibilities
- Mastery of iterative and experimental application development methods, including the creation of robust business simulation
- Facility with information dissemination and collaboration technologies
- the rest of the organization has three key innovation duties:
- Providing technology tools and infrastructure to support innovation initiatives.
- Providing skilled technical people for all substantial innovation initiatives
- Rapidly incorporating the new innovation’s information, systems, technology, and business logic into the corporate infrastructure
- requires a corporate mechanism that can overcome traditional silo resistance through its mandate and capabilities. That’s the work of an enterprise integration group.
- An effective EIG:
- Manages the corporate portfolio of integration activities and initiatives
- Serves as the corporation’s center of expertise in process management and improvement, large-project management, and program and portfolio management
- Contributes staff to major business integration initiatives – sometimes leaders, always coaches.
- Is responsible for enterprise architecture – the overall configuration and managed evolution of the company’s business processes, information, and technology
- Anticipates how operations might work in a more integrated fashion in the future and what management changes that might require
- One of the biggest challenges in forming an EIG may be gathering the right staff. You need people with a broad understanding of every piece of the business who can look at the enterprise systemically
- You also need people with experience in areas like enterprise-system implementation and information architecture
- You need people with pragmatic coaching skills, who can guide business partners
- you need people with very strong relationship-building skills, because business integration demands more than just a consensus about how things should work; it demands a commitment to operate differently
- If horizontal integration is a sufficiently urgent strategic imperative, the EIG should report to the CEO. More commonly, it reports to the COO
- What Integration Capabilities Do You Need?
- Ask yourself to what extent your organization has already mastered these capabilities, then configure your EIG to fill in the gaps
- Governance
- Relationship Management
- Program Management
- Architecture
- Process Skills
- Change Leadership
- Three Technology Management Imperatives
- success demands a comprehensive IT management strategy and infrastructure consisting of three elements:
- Business Platform
- Outside Services
- Web 2.0 (ed. note - oh, 2008)
- IT organizations almost invariably have much more experience with enterprise integration than with distributed innovation
- IT people have intimate knowledge of the workings of the company, including the idiosyncrasies and hidden interdependencies between processes and data
- Their work requires them to take a systemic view of business-information and process flows
- “The executive team knows how business integration is supposed to work, but IT sees the detail level and can leverage that understanding.”
- Six sets of skills are central to the work of an EIG. The first five are often found in IT organizations. The sixth is rarer – and needs to be cherished and nurtured:
- Familiarity with the concepts and methods of business process design and improvement
- Experience with cross-functional systems implementation
- Competence in analyzing architecture
- Expertise in information management
- Experience with program management
- A talent for relationship management
- In equal measure, DIG and EIG need to connect and build relationships with internal change agents and with external partners and stakeholders
- Most members of both groups must be trilingual – fluent in the language of business, able to understand and translate the language of IT, and at ease with the natural language of sociability
- They also need to know the organization – how it works, who the movers and shakers are, and whom to ask for help in finding solutions
- These are rare beings, and competition for their services will be high
- neither the DIG nor the EIG can be very large – nor do they need to be
- Ideally, the members of these two groups will be drawn from the business and after a few years will return to operational roles
- Despite their commonalities, the two units operate in different spheres.
- The DIG enables the corporation to devise new ways to operate; the EIG enables the corporation to coordinate its operations to improve performance
- The DIG creates new business variations; the EIG takes yesterday’s new variations and folds them into the operating model of the enterprise
- The DIG injects novelty and variety; the EIG battles against fragmentation
- together they enable the corporation to evolve. And the company pursuing growth must excel at both
Written on April 1, 2018