1. Introduction

Qualitative studies are popular among accounting scholars. This context-rich approach expands our understanding of how accounting phenomena are created, experienced and interpreted within a complex social environment (Mason, 2012). However, qualitative research places heavy demands on the individual researcher, particularly during the data analysis phase (Ahrens & Dent, 1998; O’Dwyer, 2008), with one such challenge being the large volume of data commonly generated. Analysis of large bodies of narrative text is time-consuming, resource-intensive and subject to interpretation bias, reducing the external validity of such research (Indulska et al., 2012).

The objective of this paper is to encourage researchers, particularly doctoral students, to consider an analytical protocol that combines quantitative methods with traditional qualitative approaches. The major focus of the paper is to provide a detailed explanation of a systematic analytical protocol designed to reveal key connections between phenomena of interest. Rather than a conceptual discussion, the protocol is described within the context of the author’s own data analysis experience while undertaking a doctoral study. This provides a context to consider how the analytical protocol, namely C-Ratios (which quantify the relative strength of interactions between constructs by normalising the frequency of data coding co-occurrences), supported the research objectives. C-Ratios are an effective technique for managing the analysis of large volumes of narrative data and for ensuring that all data are considered when interpreting evidence and drawing conclusions.

The following section discusses the separation that has emerged between quantitative and qualitative research. Section 3 discusses some challenges faced by qualitative researchers during data analysis. Section 4 details the application of quantitative analytical methods to qualitative data. Section 5 provides a detailed description of the use of C-Ratios as a key element of data analysis in one study. Finally, the paper concludes with a retrospective evaluation of the use of the technique and considers its merits and the problems encountered.

2. Separation between Quantitative and Qualitative Research

From a methodological perspective, research is commonly classified as either qualitative or quantitative and both approaches are valid (Myers, 2009). The distinctions between approaches are frequently stereotyped, for example: theory builders or theory testers; creators of horizontal or vertical knowledge; addressing questions regarding ‘how & why’ or ‘how often & how many’ (Malina et al., 2011; Pratt et al., 2020). In the context of accounting, studies tend to adopt either quantitative or qualitative methods (Ihantola & Kihn, 2011) and both methods fruitfully advance our understanding of accounting phenomena. However, accounting scholars have highlighted a worrying trend of separation between methodological camps (Euske et al., 2011; Modell, 2005, 2007). This has resulted in researchers being stereotyped as ‘number crunchers’ or ‘navel gazers’, concerned with either the ‘hard or squishy’ (Malina et al., 2011, p. 60). This divide has resulted in “methodological camps that do not communicate well to refine or modify our incomplete theories and knowledge of practice” (Malina et al., 2011, p. 60). As Modell & Humphrey (2008, p. 93) argue, the “dividing lines … might have been over-drawn”. Research is fundamentally a process of building on prior knowledge, and advances in knowledge depend on researchers extending the ideas, results and procedures of peers in their research communities (Euske et al., 2011). The existence of separate methodological camps is therefore concerning, as it may hamper scholarly advances.

Quantitative methods can be traced back to the natural sciences and are widely used in the social sciences; they include, for example, survey methods and experiments (Myers, 2009). These methods are appropriate for large data samples where the researcher draws on pre-existing research instruments to generate statistically testable data (Creswell, 2009). Emphasis is placed on testing theory and hence the relationship between theory and research tends to be viewed as deductive. Studies of this type tend to focus on explaining variance in phenomena through confirmatory testing, making systematic comparisons, and measuring and analysing causal relationships (Pratt et al., 2020; Silverman, 2005). The trends and patterns that emerge can then be generalised to a wider population (Creswell, 2009; Myers, 2009). In this tradition, methodological transparency is critical to demonstrate trustworthiness (Pratt et al., 2020) as findings should be replicable.

In contrast, qualitative approaches emerged from the traditions of anthropology and sociology (Myers, 2009; Pelto & Pelto, 1978) and include, for example, case studies and field studies. This type of research relaxes the rigours of quantitative methods and permits the researcher to explore complex relationships and to discover rather than confirm or test (Corbin & Strauss, 2008). According to Moll et al. (2006), the philosophical origins of qualitative methods stress the value of understanding human behaviours and social interactions in an organisational context. From an accounting perspective, qualitative studies “seek a holistic understanding and critique of lived experiences, social settings and behaviours, through researchers’ engagement with the everyday” (Parker, 2012, p. 55). Qualitative studies are motivated towards addressing issues in depth and answering ‘how’ and ‘why’ questions. Qualitative data has several strengths. For instance, it captures naturally occurring events within their natural setting, providing ‘real-life’ understanding from which ‘thick descriptions’ emerge (Denzin, 1994), and it permits an exploration of underlying processes. Qualitative data is potentially rich, vivid and holistic, offering the potential to uncover real-world complexities. However, from a practical perspective, balancing this richness with the means to establish an adequate level of trustworthiness in the findings that emerge can be challenging (Modell & Humphrey, 2008). While qualitative and quantitative studies differ in the types of questions they address and how they are designed, both are widely used by accounting researchers. In the following section, attention focuses on challenges that arise at the data-analysis stage of qualitative studies.

3. Challenges Faced by Qualitative Researchers During Data Analysis

The data-analysis phase places heavy demands on qualitative researchers (Ahrens & Dent, 1998; O’Dwyer, 2008). One of the foremost challenges, particularly for inexperienced qualitative researchers, lies in how to manage and analyse the large volumes of qualitative data collected (Yin, 2009). As interviews commonly yield hundreds or even thousands of pages of transcribed text, such volumes of evidence can quickly become overwhelming (Mason, 2012; O’Dwyer, 2008). Referring to the vast quantities of data gathered in case study research, Pettigrew (1988, p. 98) fittingly warns: “there is an ever-present danger of death by data asphyxiation”. This is concerning as Faust (1982) observes that humans struggle to process large volumes of data because it overloads our information-processing capabilities. A perceived lack of progress can lead to the researcher feeling disheartened and to studies becoming stalled (McQueen & Knussen, 2002). Furthermore, researchers may respond by arriving at rushed, partial and unfounded conclusions (Miles & Huberman, 2014) as they strive to reduce complexity into more manageable configurations.

Qualitative data is often perceived as lacking a standardised structure (Eisenhardt, 1989; Lillis, 1999, 2006; Smith, 2015), taking the form of extended narratives characterised as dispersed, bulky and sequential rather than simultaneous (Miles & Huberman, 2014). This point is particularly relevant when data is collected through semi-structured interviews (Mason, 2012). The lack of structure reflects the inherent flexibility of qualitative data collection techniques. For example, the extent to which an interviewer needs to probe specific issues to ensure a complete understanding is likely to vary at each interview.

These challenges are further amplified by the absence of established techniques to ensure that qualitative data analysis is carried out in a way that is impartial and complete. It is important that all data collected is analysed and considered equally (Lillis, 1999, 2006; Miles & Huberman, 2014). It is tempting to be drawn towards data that corresponds seamlessly with theory, or towards quotes that encapsulate the essence of what we are studying (Lillis, 2006). We instinctively recall vivid events, contextually rich stories and exciting descriptions more clearly than mundane passages (McQueen & Knussen, 2002; Tversky & Kahneman, 1973). This is problematic if such quotes are not representative of the entire data set, as it may introduce bias. Further potential for bias exists because qualitative data analysis is dependent on coding classifications prescribed by researchers themselves (Lillis, 1999).

In addition, readers of qualitative research have limited opportunities to confirm the accuracy of the process of analysis and interpretation. As Marginson (2008, p. 334) points out, only the researchers possess the entire data set from which conclusions have been drawn, so readers have little scope to verify the accuracy of researchers’ analysis. Researchers must gain readers’ trust by convincing them that the data analysis has been systematically constructed (Seale, 1999) and conducted in a consistent manner. Both the underlying logic of the analytical choices made and the procedures and practices followed must be well explained (Mason, 2012). I discuss strategies to manage these qualitative data analysis challenges in the next section.

4. The Application of Quantitative Analytical Methods to Assist in Analysing Qualitative Data

“In qualitative research, numbers tend to get ignored. After all, the hallmark of qualitative research is that it goes beyond how much there is of something to tell us about its essential qualities. However, a lot of counting goes on in the background when judgments of qualities are being made. When we identify a theme or a pattern, we’re isolating something that (a) happens a number of times and (b) consistently happens in specific ways. The “numbers of times” and “consistency” judgments are based on counting. When we make a generalization, we amass a swarm of particulars and decide, almost unconsciously, which particulars are there more often, matter more than others, go together, and so on. When we say something is “important” or “significant” or “recurrent” we have come to that estimate, in part, by making counts, comparisons, and weights.”

(Miles & Huberman, 2014, p. 253)

A strategy to overcome the challenges associated with analysing qualitative data is to look beyond traditional qualitative methods of analysis and consider quantitative techniques as a means of providing support. Miles & Huberman (2014) advocate quantification and the use of ‘numbers’ to complement analysis. In practice, this means combining qualitative analysis with quantitative analysis either concurrently or sequentially (Creswell, 2009). Advocates suggest that a broader analytical approach enhances completeness, reduces the potential for bias and improves reliability, resulting in more convincing and accurate conclusions (Creswell, 2009; Ihantola & Kihn, 2011; Lillis, 2006). Quantification underpins several analytical protocols used by qualitative researchers: content analysis, matrices and C-Ratios, each of which I discuss in turn.

4.1 Content Analysis

Accounting researchers have successfully used counts of the frequency of words, phrases or both for content analysis of archival data (Smith, 2015). This provides a systematic method to analyse the content of text (Steenkamp & Northcott, 2007). The resulting frequencies are then analysed, thereby introducing elements of quantification into the process of analysing qualitative data (Easterby-Smith et al., 2002). For example, Smith & Taffler (2000) use a keyword ratio variable (the number of occurrences of a keyword divided by the total number of words in the narrative section of the chairman’s statement) to indicate the perceived importance of keywords across a selection of chairman’s statements.
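
To illustrate the mechanics, a minimal sketch in Python of this style of keyword ratio follows; the example text and keyword list are invented for illustration and are not drawn from Smith & Taffler (2000):

    import re

    def keyword_ratio(narrative: str, keywords: set) -> float:
        # Ratio of keyword occurrences to total words in a narrative.
        words = re.findall(r"[a-z']+", narrative.lower())
        hits = sum(1 for word in words if word in keywords)
        return hits / len(words) if words else 0.0

    # Hypothetical extract from a chairman's statement and keyword list.
    statement = ("Profit grew strongly this year and the board expects "
                 "profit growth to continue despite market uncertainty.")
    print(round(keyword_ratio(statement, {"profit", "growth", "loss"}), 3))  # 0.188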

According to Malina et al. (2011), content analysis has been used widely by accounting scholars, particularly in the areas of corporate reporting and finance (for example, in examining the narrative sections of annual reports and corporate communications (Abrahamson & Amir, 1996; Edgar et al., 2018; Merkl-Davies et al., 2011; Moreno et al., 2019; Tennyson et al., 1990)). However, quantifying the relative importance of words, phrases or both, such as in the approach used by Smith & Taffler (2000), tends to be confined to analysing archival data or documentary evidence rather than interview data.

4.2 Matrices

To overcome the challenges inherent in making sense of masses of qualitative data, Lillis (1999) encourages the use of the systematic analytical approach of structured data display described by Miles & Huberman (2014). Matrices are a form of data display, defined as the “crossing of two or more main dimensions to see how they interact” (Miles & Huberman, 2014, p. 239). They suggest that the process of creating matrices is both creative and systematic. Mason (2012) observes that the selection of the dimensions represented on each axis is an important decision, as it reflects the application of interpretative principles.

The technique aids analytical thinking, making it easier to identify connections or relationships that exist within data (Mason, 2012). Lillis (1999) points to the usefulness of matrix displays in enhancing trust by establishing an audit trail from interview transcripts to results, ensuring that all cases are evaluated, and assisting in revealing new empirically based propositions. Reflecting on her own use of matrix displays, Lillis (1999) observes how they establish a disciplined approach that enhances completeness and impartiality. Similarly, O’Dwyer (2008) writes positively of his own experiences using matrices. Specifically, detailed coding, combined with the overviews obtained through matrices, facilitates a holistic view of the data while also bringing to light interrelationships and contradictions in the data. Furthermore, O’Dwyer (2008) observes how the thoroughness of his data analysis protocol instilled confidence, allowing him to be more convincing in articulating his arguments.

4.3 Co-occurrences and C-Ratio

Quantifying the overlap between interview data coded to multiple codes can assist researchers in assessing the strength of the relation between constructs of interest. Overlap is measured through co-occurrences and C-Ratios, which are proxies for the level of interaction between empirical data coded by researchers as relating to two or more constructs of interest. A C-Ratio ranges between 0 (indicating that two codes never co-occur) and 1 (indicating that they completely co-occur). Therefore, interrelationships with a high (low) C-Ratio reflect a high (low) interaction between constructs. C-Ratios are calculated using the following formula (https://doc.atlasti.com/ManualWin.v9/ATLAS.ti_ManualWin.v9.pdf):

C = n12 / (n1 + n2 - n12)

where n12 is the co-occurrence frequency of two codes, code 1 and code 2, n1 is the occurrence frequency of code 1 and n2 is the occurrence frequency of code 2.
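
In code, the C-Ratio is a simple set-overlap measure (equivalent to the Jaccard index applied to coding frequencies). A minimal Python sketch follows, in which the coded segments and code names are invented for illustration:

    from collections import Counter
    from itertools import combinations

    def c_ratio(n1: int, n2: int, n12: int) -> float:
        # C-Ratio: co-occurrence frequency normalised by the combined
        # frequency of the two codes, i.e. n12 / (n1 + n2 - n12).
        denominator = n1 + n2 - n12
        return n12 / denominator if denominator else 0.0

    # Hypothetical coded segments: each entry is the set of codes a
    # researcher applied to one quotation.
    segments = [
        {"beliefs", "facilitating"},
        {"diagnostic", "implementing"},
        {"beliefs", "facilitating", "implementing"},
        {"interactive", "synthesising"},
        {"diagnostic", "implementing"},
    ]

    occurrences = Counter(code for seg in segments for code in seg)    # n1, n2
    co_occurrences = Counter(                                          # n12
        pair for seg in segments for pair in combinations(sorted(seg), 2)
    )

    for (code1, code2), n12 in sorted(co_occurrences.items()):
        c = c_ratio(occurrences[code1], occurrences[code2], n12)
        print(f"{code1} x {code2}: n12 = {n12}, C-Ratio = {c:.2f}")

For example, in this toy data ‘diagnostic’ occurs twice, ‘implementing’ three times and the pair co-occurs twice, giving a C-Ratio of 2/(2 + 3 - 2) = 0.67.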

The calculation of the C-Ratio is based on approaches borrowed from quantitative content analysis. Code co-occurrences are typically displayed within a data matrix to produce a ‘Code Co-occurrence Table’. Data analysis software packages, such as ATLAS.ti or NVivo, allow users to drill down to retrieve the actual quotations underlying the matrix. In the context of accounting, only a small number of studies have applied this technique. An overview of each follows.

Malina and Selto (2004) examine a large manufacturing company’s efforts to improve profitability through the design and use of a performance measurement model. They coded interview data using data analysis software (ATLAS.ti) according to whether the interview comments were positive or negative with respect to specified desirable attributes of performance measures identified in the literature. The authors present the co-occurrences between favourable comments relating to specific attributes and unfavourable comments relating to other attributes within the same interview text (for measures that earlier analysis identified as being omitted from the design of the performance measurement model). Stronger co-occurrences pointed to a greater trade-off effect, allowing the authors to identify the key trade-offs between attributes. For example, within the data, the attribute of improved decision making was considered subservient (and omitted) relative to the objectivity and accuracy attributes.

Malina & Selto (2015) utilise C-Ratios to quantify the interaction between empirical data constructs in order to identify key factors associated with performance measurement model longevity. They first code data to Ferreira and Otley’s (2009) management control framework (key performance measures, target setting, performance evaluation, reward system and information flows) and second, to behavioural-economic nudges (anchoring and adjustment, availability, conformity and framing).

In the Malina and Selto (2015) study, data analysis focused on examining the use of behavioural-economic nudges in an enduring performance measurement model. The authors summarised the frequency of interview excerpts coded to constructs (1,541 in total) within Ferreira and Otley’s (2009) framework (718) and the behavioural-economic nudges (877). They argued that code frequencies are proxies “for the overall perceived importance of posited constructs” (p. 35). The analysis proceeded to present a matrix summarising the frequencies with which the same text was coded to both categories of constructs, resulting in 1,721 co-occurring codes: “Co-occurrences are proxies for interactions of concepts underlying the codes. High co-occurrence frequencies indicate the importance of interacting concepts” (p. 36). The authors use the C-Ratio (as defined above) to measure the intensity of the interaction between behavioural-economic nudges and the design and use of a performance measurement model. While acknowledging that there is no standard significance level for C-Ratios, the authors focus on interactions with C-Ratios of 0.25 or greater, as they suggest that interactions with relatively higher C-Ratios reflect the most likely drivers of performance measurement models’ longevity. The evidence regarding co-occurrence within interview content identified four behavioural nudges present in the design and use of the performance measurement model.

Lillis et al. (2017) employ C-Ratios during data analysis to tease out how subjectivity emerges and becomes informative within performance measurement and reward systems. Interview data was coded to concepts from existing theoretical frameworks using NVivo. The authors perform two subsequent rounds of coding. In the first round, data is coded to informativeness criteria identified in the incentive contracting literature (effort intensity, effort direction, isolating agent effort and congruity). Following this, data is coded to subjective interventions (subjective measures, subjective initial rating, subjective final rating and subjective rewards). This process yielded 1,952 data codes, of which 870 related to informativeness codes and 1,082 to subjective intervention codes. For a more insightful analysis, Lillis et al. (2017) employ the co-occurrence and C-Ratio technique.

“Evidence in qualitative studies is generally conveyed through the use of quotations along with researcher interpretations of broader patterns in data. It is difficult to convey the ‘weight’ of evidence using this approach, and quotations can only constitute examples from extensive narratives that form the field study data base. To address the challenge of how to present the ‘weight’ of evidence in qualitative data, we adopt a technique which allows us to quantify patterns in the data.” (Lillis et al., 2017, p. 19)

Using capabilities within the qualitative data analysis software, Lillis et al. (2017) produce a “matrix of the frequencies of which all code pairing was applied to the same narrative” (p. 18). These co-occurrences act as proxies for interactions of the concepts underlying the codes. This reveals a total of 1,009 co-occurring narratives among informativeness codes and subjectivity codes. The co-occurrences were then translated into C-Ratios, where high (low) C-Ratios indicate high (low) interaction between concepts in interviewee narratives. The authors set an arbitrary C-Ratio cut-off of 0.20. Six interactions (pairings or cells) meet this cut-off and the findings section is structured around each of these salient interactions (for example, one interaction involved subjective measures and effort intensity, while another consisted of subjective initial rating and effort direction). This approach enabled Lillis et al. (2017) to focus on the “co-occurring codes most worthy of further investigation” (p. 19).

Identifying key interactions between data constructs is a common data-analysis objective in Malina & Selto (2004, 2015) and Lillis et al. (2017). These studies all use co-occurrences and C-Ratios to systematically measure the presence and extent of overlap in data coded to multiple codes. More importantly, C-Ratios permit easy identification of important interactions between constructs of interest. Section 5 focuses on the application of C-Ratios to support data analysis in the context of a doctoral research study.

5. Application of the Structured Analytical Approach and C-Ratio Protocol in a PhD Study

Co-occurrences and the C-Ratio technique described in Section 4.3 formed a key component of data analysis in a PhD study (Martyn, 2018). This section provides a detailed description of the use of co-occurrences and C-Ratios as part of the data analysis process. The broad objective of the PhD was to examine how management control systems guide middle managers’ actions. The management literature suggests that organisational performance is primarily driven by what happens at the middle rather than at the top levels of organisational hierarchies (Currie & Procter, 2005). From a management control perspective, only a handful of studies have investigated how management control systems are used to steer the work of middle managers. Both the management literature and the management control systems literature share a related primary interest, improving organisational performance, yet the two streams of literature remain largely discrete. The motivation for the study was to draw together these two streams of literature to advance understanding of how management control systems guide middle managers’ efforts in contemporary organisations. The study draws on two theoretical frames: first, Simons’ (1995) levers of control framework, which conceptualises four levers of control that senior managers use to realise strategy (beliefs systems, boundary systems, control systems used in an interactive manner and control systems used in a diagnostic manner); and second, Floyd and Wooldridge’s (1992, 1997) middle manager strategic roles typology, which categorises the span of middle managers’ influence on strategy (implementing deliberate strategy, championing alternatives, synthesising information and facilitating adaptability).

Using a multiple case study design (two firms from the medical device sector and two firms from the information technology sector), 43 in-depth semi-structured interviews were conducted with middle managers working in different functional areas. One of the objectives of the study was to examine how the modes of control characterised in the levers of control framework (beliefs systems, boundary systems, interactive control systems and diagnostic control systems) steer the different middle manager activities identified in the management literature (implementing deliberate strategy, synthesising information, championing alternatives and facilitating adaptability). Determining the level of interaction between control levers and middle manager strategic roles was a key concern during data analysis. In addition, the intensity of the interactions between control levers and middle manager strategic roles reveals the extent to which individual control levers steer specific strategic action at middle management level. Applying the quantitative analytical procedures discussed in Section 4.3 aligned well with the intent underpinning data analysis in the study.

The structured data analysis process comprised ten phases of analysis. For completeness, all ten distinct phases are included; broadly, they consist of the three concurrent activities (data reduction, data display and conclusion drawing) identified by Miles & Huberman (2014). Less emphasis is given to phases 1-7 as they are relatively standard across qualitative studies. In contrast, more detailed attention is given to phases 8-10 to provide a detailed account of the application of the C-Ratio technique in an empirical study.

Phase 1: Initial Engagement with the Data

In line with Eisenhardt (1989) and Bryman & Bell (2003), interview transcripts were read several times and on-site field notes were reviewed to develop an intimate knowledge of the data (O’Dwyer, 2008).

Phase 2: Creation of Database of Transcripts

Given the large volume of narrative data collected during this study (interview transcripts totalled 1,290 pages of text), a database of interview transcripts, audio files and field notes was created within NVivo version 10 (a qualitative data analysis software package). In addition, both case and interviewee attributes (industry sector, gender and role categorisation) were recorded.

Phase 3: First Round of Coding

The initial round of coding followed a deductive analysis approach (Moll et al., 2006) reflecting the study’s strongly theory-driven nature (relying on two theories: levers of control and middle managers’ strategic role typology). Data was coded to category codes constructed from the levers of control framework and broad participant-driven codes.

Phase 4: Descriptive Write-ups

Following Eisenhardt (2002), descriptive write-ups (including extensive use of summary tables) organised around case and theoretical properties were prepared. Extracting data from NVivo based on the code categories identified in Phase 3 supported this process. The descriptive write-ups helped to identify: recurring themes and patterns; similarities and differences between cases; and how the levers of control were influencing interviewees.

Phase 5: Second Round of Coding

Coding expanded to create data-driven sub-categories to capture behaviours and implications. Using the reporting functionality within NVivo, data was extracted within identified code categories and was subsequently used to prepare data summary tables.

Phase 6: Third Round of Coding

Round three of coding mapped interview data against the characteristics of the middle management strategic involvement typology (Floyd & Wooldridge, 1992, 1997) identified in the literature. Patterns in how the levers of control could be linked to specific middle manager strategic roles began to emerge.

Phase 7: Review of Volume of Data

By this point, the researcher had spent a prolonged period immersed in analysing the large body of evidence gathered from the field. It was time to reflect on the progress to date and challenges experienced. While the three rounds of coding had certainly aided the researcher in gaining insights, coping with the large volume of data was becoming overwhelming and ‘data asphyxiation’ (Pettigrew, 1988) was setting in.

Phase 8: Adoption of Matrix Approach

To overcome the challenge of analysing the large volume of data gathered from four case studies, the researcher applied the structured data display (matrix) approach recommended by Miles & Huberman (2014) as a data reduction tool. As discussed in Section 4.2, this technique aids analytical thinking and the identification of connections or relationships within data, rendering it a suitable approach for the study. Matrices would support the analysis by crossing dimensions, in this case the individual control levers against middle management strategic roles. Lillis (1999) suggests that research questions should guide matrix design parameters. This recommendation was adhered to in this study: matrix columns represented each of the four levers of control (Beliefs Systems, Boundary Systems, Diagnostic Control Systems and Interactive Control Systems) while matrix rows corresponded to the middle management strategic roles (Facilitating, Implementing, Championing and Synthesising). Selecting the dimensions represented on each axis is not analytically neutral; the researcher had to reflect on how this would affect the process of interpreting the data and its appropriateness in terms of addressing the questions posed in the study.

Based on the iterative coding process (phases 3, 5 and 6), significant segments of interview narrative were coded in NVivo to both the levers of control and the middle manager strategic role codes. More in-depth analysis of these coding overlaps was key to gaining focused insights relevant to addressing the research objective. Using NVivo’s Crosstab functionality, the researcher constructed a two-way matrix template. Table 1 illustrates the outline of the template matrix structure (unpopulated). Using this matrix template, matrices were generated from the interview data coded in NVivo for each case firm, role categorisation (marketing, finance, manufacturing) and industry sector (IT and medical device). As the matrices crossed two dimensions (levers of control and middle manager strategic roles), they permitted a greater understanding of the extent to which the two dimensions interact, revealing, as Lillis (2006) suggests, links between empirical observations and theory.

Table 1. Matrix Template

Middle manager strategic roles        Levers of control (Simons, 1995)
(Floyd & Wooldridge, 1992, 1997)      Belief systems   Boundary systems   Diagnostic control systems   Interactive control systems
Facilitating
Implementing
Championing
Synthesising
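
Outside NVivo, an equivalent two-way display can be produced from a segment-level coding export. A minimal pandas sketch, in which the column names and rows are invented stand-ins for the real coding data:

    import pandas as pd

    # Hypothetical export: one row per narrative segment coded to both a
    # lever of control and a middle manager strategic role.
    df = pd.DataFrame({
        "role":  ["Facilitating", "Implementing", "Implementing", "Synthesising"],
        "lever": ["Beliefs", "Diagnostic", "Diagnostic", "Interactive"],
    })

    # Two-way matrix of coding co-occurrences (cf. the Table 1 template).
    print(pd.crosstab(df["role"], df["lever"]))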

Phase 9: Use of C-Ratios

To build further on this line of analysis and assess the strength of the interaction between the individual control levers and middle manager strategic roles, the researcher applied a systematic approach to reveal the “weight of evidence”, following the procedure used in the previous studies outlined in Section 4.3 (Lillis et al., 2017; Malina & Selto, 2004, 2015). This approach involved calculating C-Ratios to quantify the relative strength of interactions between the two dimensions of interest in this study. Several steps were required. First, using NVivo’s Query functionality, the researcher counted instances of narratives coded to the levers of control and middle manager dimensions to create a code frequency table (Table 2).

Table 2. Sample Code Frequency Summary Report by Case Firm

                                Case Firm 1   Case Firm 2   Case Firm 3   Case Firm 4   Total
Facilitating                    169           228           235           145           777
Implementing                    242           216           291           322           1,071
Championing                     169           218           69            97            553
Synthesising                    69            98            109           81            357
Total role codes                649           760           704           645           2,758

Beliefs system                  73            79            74            74            300
Boundary system                 68            74            62            70            274
Diagnostic control system       83            141           89            96            409
Interactive control system      58            98            57            75            288
Total levers of control codes   282           392           282           315           1,271

Second, the researcher developed a two-way crosstab display in NVivo to summarise the co-occurrences between the levers of control and middle manager strategic roles, forming a matrix of coding co-occurrences as illustrated in Table 3. Essentially, this summarised the frequency with which discrete narratives had been coded to each code pairing (meaning the narrative was coded to both the lever of control and the middle manager strategic role during phases 3, 5 and 6). For example, 105 discrete narratives were coded to both beliefs systems and middle management’s facilitating role.

Table 3. Sample Matrix Report: Co-occurrences between Data Coded to Both Middle Manager Strategic Roles and Levers of Control

Middle manager strategic role   Beliefs systems   Boundary systems   Diagnostic control systems   Interactive control systems
Facilitating                    105               65                 84                           58
Implementing                    145               89                 404                          101
Championing                     67                96                 55                           53
Synthesising                    14                83                 14                           173

Third, by translating the co-occurrence frequencies into a relative measure, the researcher was able to evaluate the strength of specific interactions. To normalise the coding co-occurrence frequency (Lillis et al., 2017), absolute co-occurrence measures were adjusted to C-Ratios using the formula C12 = n12/(n1 + n2 - n12). For example, the C-Ratio between the beliefs system and middle managers’ facilitating role is calculated as 105/(777 + 300 - 105) = 0.11, where 105 (Table 3) is the co-occurrence between beliefs systems and the facilitating role, 777 (Table 2) is the narrative coding frequency of facilitating and 300 (Table 2) is the narrative coding frequency of beliefs system.

As highlighted in Section 4.3, a C-Ratio has a value between 0 (indicating no co-occurrence) and 1 (indicating complete co-occurrence). Therefore, interrelationships with a high (low) C-Ratio reflect a high (low) interaction between underlying dimensions. For instance, in Table 4, the data reveals a strong interaction (C-Ratio of 0.38) between narratives coded to the diagnostic control system lever and middle managements’ implementing strategy role. In contrast, there is little interaction (C-Ratio of 0.02) between narratives coded to the beliefs systems and middle managements’ synthesising role.

Table 4. Sample Report Providing a Summary of C-Ratios

Middle manager role   Beliefs systems   Boundary systems   Diagnostic control systems   Interactive control systems
Facilitating          0.11              0.07               0.08                         0.06
Implementing          0.12              0.07               0.38                         0.08
Championing           0.09              0.13               0.06                         0.07
Synthesising          0.02              0.15               0.02                         0.37
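
As a cross-check, the C-Ratios in Table 4 can be reproduced directly from the code frequency totals in Table 2 and the co-occurrences in Table 3. A short Python sketch follows; the 0.20 cut-off applied at the end is purely illustrative (echoing Lillis et al., 2017) and was not a threshold set in this study:

    roles = {"Facilitating": 777, "Implementing": 1071,
             "Championing": 553, "Synthesising": 357}     # Table 2 totals
    levers = {"Beliefs": 300, "Boundary": 274,
              "Diagnostic": 409, "Interactive": 288}      # Table 2 totals
    co = {                                                # Table 3 counts
        "Facilitating": {"Beliefs": 105, "Boundary": 65, "Diagnostic": 84, "Interactive": 58},
        "Implementing": {"Beliefs": 145, "Boundary": 89, "Diagnostic": 404, "Interactive": 101},
        "Championing":  {"Beliefs": 67,  "Boundary": 96, "Diagnostic": 55,  "Interactive": 53},
        "Synthesising": {"Beliefs": 14,  "Boundary": 83, "Diagnostic": 14,  "Interactive": 173},
    }

    for role, n1 in roles.items():
        for lever, n2 in levers.items():
            n12 = co[role][lever]
            c = n12 / (n1 + n2 - n12)                     # C-Ratio
            flag = "  <- salient" if c >= 0.20 else ""    # illustrative cut-off
            print(f"{role:<12} x {lever:<11}: {c:.2f}{flag}")

Running this reproduces Table 4, flagging the diagnostic-implementing (0.38) and interactive-synthesising (0.37) pairings as the salient interactions.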

Phase 10: Final Write-up of Findings

Matrices aided the researcher in identifying the linkages in the empirical evidence, specifically which control levers interacted with particular middle management roles, and perhaps more importantly, the C-Ratio technique quantified the strength of the interaction, or what Lillis et al. (2017) refer to as the weight of the evidence. As this analysis procedure was applied to the entire dataset and in turn to cross-sections of the data (each firm, each role category and each industry sector), it enabled the researcher to find patterns and make contrasts and comparisons. Using this technique, salient interactions (higher C-Ratios) were immediately evident and served as a strong guide during the write-up process. For example, the C-Ratio analysis revealed a strong interaction (0.37) between middle managers’ synthesising strategic role and control systems used in an interactive manner. This suggests that interactive control systems are a salient mode of control in guiding middle managers’ synthesising activities. Similarly, the relatively high C-Ratio (0.38) observed between diagnostic control systems and middle managers’ role in implementing strategy indicates that diagnostic control systems exert a strong influence on middle managers as they implement organisational strategy.

While the C-Ratios displayed in Table 4 summarise the relative weight of interaction between code pairings, the researcher could, at the ‘press of a button’ in NVivo, drill down into the detailed narratives, permitting ready access to relevant quotations. The ease with which the researcher could switch between this ‘high level’ view and the granular detail aided the write-up process.
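
In a scripted environment, the equivalent drill-down amounts to filtering the coded segments for a given code pairing. A hypothetical sketch (the quotations and data structure are invented and do not reflect NVivo’s internal format):

    # Hypothetical store of quotations and the codes applied to each.
    coded_quotes = [
        ("We review the KPI dashboard with the team every Monday.",
         {"Diagnostic control systems", "Implementing"}),
        ("I pull together market signals and pass them up the line.",
         {"Interactive control systems", "Synthesising"}),
    ]

    def quotes_for_pairing(code_a: str, code_b: str) -> list:
        # Return quotations coded to BOTH codes, i.e. a code pairing.
        return [quote for quote, codes in coded_quotes
                if code_a in codes and code_b in codes]

    print(quotes_for_pairing("Diagnostic control systems", "Implementing"))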

6. Conclusions and Lessons Learned

This paper describes ten phases of data analysis in a structured analytical approach to analysing qualitative data using matrices (Miles & Huberman, 2014) and the calculation of C-Ratios. The study sought to enhance understanding of how different modes of control steer middle managers in fulfilling several strategic roles. The C-Ratio technique, in quantifying the frequency and strength of the association between the two constructs, helped to illuminate how this happens in practice. Furthermore, the process signalled associations that were, as Miles & Huberman (2014) suggest, ‘important’ and ‘significant’. This ‘signposting’ helped to verify the researcher’s intuitions about key linkages within the data.

On reflection, the analytical protocol followed was useful for several reasons. First, the protocol helped to alleviate the challenge of analysing substantial volumes of interview data. The process of constructing the matrices followed a structured procedure, which contrasted with the more reflective nature of the preceding phases. The two approaches, reflection and quantification, combined well, allowing scope for rich descriptions to emerge while preserving focus on key connections. Second, the analytical protocol was also helpful in pacing the analysis. Each cell in the matrices represented an interaction to potentially tease out further, albeit that some warranted more attention than others. This configuration provided a natural way to structure deeper analysis, one interaction at a time. Working through each interaction in a paced approach made the task at hand far more manageable and served as a reference against which to assess progress. O’Dwyer (2008) observes that, irrespective of the process of analysis, significant perseverance is necessary. A cell-at-a-time tactic served to regulate the researcher’s perseverance. Third, the researcher was aware of the requirement to analyse the data in a complete and unbiased way, and the process adopted supported these aims. Matrices effectively responded to the issue of completeness as all data was evaluated. As Lillis (2002, p. 511) points out, analysis is “built on interpretation… therefore potentially subject to considerable bias”. The ability of the C-Ratios to capture the strength of interaction (the weight of the evidence) between dimensions helped guard against bias. Fourth, the task of making inferences and drawing conclusions rests firmly with the researcher. The C-Ratios bolstered intuitive insights and enabled the researcher to be more confident in the interpretations and claims made (Miles & Huberman, 2014). Learning to construct convincing arguments is essential during the PhD journey, and the analytical process supported that aim.

In summary, the analytical choices made were matched to the objectives of the research, as the focus rested on understanding the interconnection between specific control levers and specific middle manager strategic roles. The importance of designing an analytical approach that supports the researcher in addressing their research questions cannot be overstated. While this study drew on a quantitative technique to support qualitative data analysis, the author does not claim that this method of data analysis is superior to others; merely that it was appropriate and justified in the context of this PhD study.

Some drawbacks in using co-occurrences and C-Ratios surfaced. First, constructing the matrices and C-Ratios, together with the associated preparatory work, represented a considerable investment of time. The process was repetitive: analyses were prepared at an overall summary level (illustrated in Tables 1 to 4) but also for each of the four case firms, each of the five role categories and the two industry categories. Second, the structure of the analysis did not easily reveal how control levers worked in combination to steer middle managers’ strategic efforts. It is widely recognised that, in practice, individual control mechanisms are interrelated. Examining an interaction between one control lever and one strategic role did not capture this, and further analysis was necessary. The process is also heavily reliant on a well-thought-through and consistently executed coding structure. In addition, the use of the technique required a detailed explanation in the methods chapter of the thesis. Explaining in detail what was done, and why it was done in the selected way (Modell & Humphrey, 2008), was necessary to ensure that the reader could trust the research.

Sharing the analytical approach in this paper is motivated by several aims. First, to encourage researchers to consider, where appropriate, supplementing qualitative analysis methods with protocols that would more traditionally be associated with the quantitative domain. Second, the paper illustrates a technique for managing the large volumes of data typically gathered in a qualitative study at doctoral level. This may be helpful to other PhD researchers who find themselves becoming overwhelmed at the critical data-analysis stage. Furthermore, advances in Artificial Intelligence auto-transcription mean that volumes of interview data can now be transcribed relatively easily, making interview data potentially more attractive as a data source for doctoral students. Third, the analytical approach struck a balance between, on the one hand, the need to allow rich findings (the creative) to emerge and, on the other hand, the means to establish adequate trust in those findings (Modell & Humphrey, 2008).


Acknowledgements

I am grateful to Breda Sweeney for her comments on an earlier version of this paper. Furthermore, I appreciate the helpful feedback of the editor and two anonymous reviewers. I gratefully acknowledge the guidance given by Anne Lillis in applying the C-Ratio technique.