Aggregating Project Level Performance Data into Organisation and Industry Insight

International Research Network on Organizing by Projects (IRNOP) 2017, 11-14 June 2017
Published by UTS ePRESS | http://pmrp.epress.lib.uts.edu.au


CONFERENCE PAPER

Kimiyoshi Oshikoji1*, Bjørn Andersen2

1Norwegian University of Science and Technology (NTNU). kimiyoshi.oshikoji@gmail.com

2Norwegian University of Science and Technology (NTNU). bjorn.andersen@ntnu.no

*Corresponding author: Kimiyoshi Oshikoji, Norwegian University of Science and Technology (NTNU). kimiyoshi.oshikoji@gmail.com

Name: International Research Network on Organizing by Projects (IRNOP) 2017

Location: Boston University, United States

Dates: 11-14 June 2017

Host Organisation: Metropolitan College at Boston University

DOI: https://doi.org/10.5130/pmrp.irnop2017.5687

Published: 07/06/2018

Citation: Oshikoji, K. and Andersen, B. 2017. Aggregating Project Level Performance Data into Organization and Industry Insight. International Research Network on Organizing by Projects (IRNOP) 2017, UTS ePRESS, Sydney: NSW, pp. 1-14. https://doi.org/10.5130/pmrp.irnop2017.5687

© 2018 by the author(s). This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 International (CC BY 4.0) License (https://creativecommons.org/licenses/by/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material for any purpose, even commercially, provided the original work is properly cited and states its license.


Synopsis

This research article is an initial examination of a performance measurement system recently introduced in the Norwegian construction industry. The 10–10 Performance Assessment Program, developed by the Construction Industry Institute, was created to meet growing demands for a more comprehensive benchmarking and measurement system in the construction sector. Empirical analysis was conducted on real project data from 14 Norwegian companies that were invited to use the system in order to better monitor and improve their project success.

Research Design

The design of this study is primarily quantitative in nature: project data gathered through the 10–10 system were analysed to determine whether there were significant correlations between the various components of the system, consisting of input measures, output measures, survey questions and project characteristics.

Relevance for practice and education

Projects play an integral role in many industries, and therefore being able to better monitor and improve their chances of success is of high value to society. Through the analysis in this article, a better understanding has been obtained of the factors that can help projects succeed within the construction industry, as well as in other industries on a broader scale.

Main Findings

The findings reveal that several key metrics of the 10–10 system were strong indicators of cost overrun in a project. These indicators spanned a wide variety of areas, such as customer satisfaction, project team competence, and relationships between various project stakeholders, among others. Additional findings are also discussed in line with future research efforts and the construction industry’s need for broader sector analysis.

Research Implications

The 10–10 performance measurement system can be leveraged to improve project performance not only at the individual project level but at the aggregate organizational and industry levels as well. Identifying and utilizing performance systems that are able to interconnect these various levels is key to carrying out more successful projects in the future.

Keywords

Performance Measurement, Construction Industry, Project Management, Project Success

Introduction

Many performance measurement systems are in use in the construction sector today, some arguably more effective than others. In the construction industry, the three most widely used performance measurement systems are the European Foundation for Quality Management (EFQM) excellence model, the Balanced Scorecard (BSC) model, and the Key Performance Indicators (KPIs) model (Yang et al. 2010). A challenge facing many organizations today is selecting the right performance measurement system to meet their various internal and external demands. At the same time, globalization and increasing competition in international markets require a broad overview of how the industry is performing at the sector or national level in order to sustain competitive advantage. There is a need to develop a more comprehensive performance measurement framework that better suits the construction industry (Bassioni, Price & Hassan 2004; Neely & Adams 2001).

The Construction Industry Institute’s (CII) 10–10 Performance Assessment Program is a performance measurement system originally developed in the United States in 2012 to help managers of construction projects improve and better monitor the success of their projects. The 10–10 system works by creating 10 leading indicators (input measures) and 10 lagging indicators (output measures) based on anonymous responses to surveys sent out to project management team members. The underlying idea is that poor performance in the input measures, as compared to industry benchmarks created from the aggregate scores of the entered projects, is cause for concern and a signal that corrective action needs to be taken; otherwise the output measures, and ultimately project performance, will suffer as a consequence. The ten input measures are created from the scores of a series of approximately fifty survey questions asking the project team members about various aspects of the project. The exact weighting function, and which questions are mapped to which input measure, are decided by the CII administrators based on their extensive industry experience. The ten measures are: planning, organizing, leading, controlling, design efficiency, human resources, quality, supply chain, safety, and sustainability. The questions that form the basis for the input measures are typically more subjective in nature and include relationship-based or interpretive questions, in contrast to the output measures, which are objective measures such as time delay or cost overrun.
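To make the rollup concrete, the sketch below shows one plausible way question scores could be combined into input measure scores. The question IDs, weights and mapping are invented for illustration; CII’s actual question-to-measure mapping and weighting function are set by its administrators and are not reproduced here.

    # Hypothetical rollup of survey question scores into 10-10 input measures.
    # The question IDs, weights and mapping below are illustrative assumptions,
    # not CII's actual (administrator-defined) weighting function.

    # Average team-member score per survey question (0-5 scale), keyed by ID
    question_scores = {16: 4.2, 19: 3.8, 26: 4.5, 27: 4.0}

    # Assumed mapping: input measure -> list of (question ID, weight) pairs
    measure_map = {
        'planning':   [(19, 0.6), (16, 0.4)],
        'organizing': [(26, 0.5), (27, 0.5)],
    }

    def input_measure_score(measure):
        """Weighted average of the question scores mapped to a measure."""
        pairs = measure_map[measure]
        total_weight = sum(weight for _, weight in pairs)
        return sum(question_scores[q] * w for q, w in pairs) / total_weight

    for measure in measure_map:
        print(measure, round(input_measure_score(measure), 2))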

One of the challenges when implementing a performance measurement system in the construction industry is that the industry is very much project-oriented: although projects may involve similar sets of processes, each one could be considered a prototype with many distinct features (Wegelius-Lehtonen 2001). Thus, creating a comprehensive performance measurement system that is effective across many types of construction projects can prove to be a difficult task. The 10–10 system offers a possible remedy by tracking projects according to phase instead of only at completion. The advantage of doing this is that project performance can be improved proactively, as warning signs can be systematically identified and acted upon. This is an evolution of the previous performance assessment system (PAS) designed by CII, which only provided feedback at project closeout and was much more retrospective in nature. The five phases making up the 10–10 system, in order from earliest to latest, are: front end planning, engineering, procurement, construction, and start-up. Additionally, separation by phase helps to reduce variability in the analysed projects, as there are fewer factors to control for, and it enables analysis at the phase level.

The aim of this paper is to test the 10–10 system by conducting an initial empirical analysis of the Norwegian construction projects that have been entered into the database thus far. A variety of relations between the input measures, output measures, project characteristics and key survey questions are briefly discussed, as well as applicability to organizational improvement efforts and broader industry analysis needs. The long-term objective of the research project is to extract recommendations to be used as the basis for a set of tools for performance measurement in the construction industry. The 10–10 system was tested for its capacity to meet both process- and project-level improvement efforts, to identify performance drivers and their effects, and to show how project-level data can be exploited at an aggregate measurement level. The distinction between these levels of measurement is explained further in the next section.

Theoretical framework

What exactly is a performance measurement system, and how should an organization go about selecting one that best fits its needs? Neely, Gregory and Platts (1995) describe a performance measurement system as “the set of metrics used to quantify both the efficiency and effectiveness of actions.” This is a very broad definition and raises many questions, such as which actions should be measured, at what level the measurement should take place and what type of measurements are best suited to the task. Takim, Akintoye and Kelly (2003), for instance, classify performance measurement along three general dimensions: quantitative or objective measures, qualitative or subjective measures, and what or whose performance is being measured. Performance measurement is fundamentally about identifying areas for business improvement, which can be done in a variety of ways.

When it comes to the type of measures that are to be used, there is a strong consensus that a balanced system needs to be employed that takes into account both quantitative and qualitative measures (Anderson & McAdam 2004; Atkinson, Waterhouse & Wells 1997). These measures are dependent in large part on what the organization hopes to get out of the system. Furthermore, a performance measurement system is not necessarily effective just by including both types of measurement. It is crucial to be able to see that there is a cause-and-effect relationship between non-financial and financial indicators that drives such improvement in performance and is ultimately tied to the strategic goals of the organization (Robinson et al. 2005; Kaplan & Norton 1996). Effectively utilizing both types of measurement and linking them to the organizational goals is a key component in having a comprehensive performance measurement system.

In terms of levels of measurement, there are two broad areas to consider: measuring on an activity level versus measuring on an aggregate level. Measuring on an activity level deals with improvement at a business process or project level, whereas measuring on an aggregate level is more about examining and comparing performance at a higher national or sector level. Productivity is currently one of the more commonly used measurements for performance and is one of the primary measurement methods used on an aggregate level.

One of the main concerns when using aggregate-level measurements is that they do not translate well into an understanding of the performance of individual companies. Goodrum, Haas and Glover (2002), for instance, discuss the discrepancy between aggregate- and activity-level productivity estimates in the U.S. construction industry from 1976 to 1998. Even though productivity measurements showed a decline on the aggregate level over this period, activity-level measurement data compiled from 200 construction activities over the same period revealed the opposite. This difference shows that aggregate-level measurements may not be a reliable indicator of actual performance in the industry. At the same time, it is important for policymakers to have aggregate-level measurements in order to get a broad overview of how the industry is doing, as the construction sector forms the core of a nation’s wealth (Muya et al. 2013). However, because of the difficulty of connecting these broad aggregate measurements to a more operational level, companies have looked elsewhere for measurements that are more applicable to their own performance (Harrison 2007).

Activity-level measurements are much more relevant for companies in this regard, as direct improvements can be implemented more readily as a result of such measurements. Activity-level measurements can be further broken down based on the level they target. Yang et al. (2010), reviewing performance measurement studies in the construction industry from 1998 to 2009, found three distinct levels being discussed: the project level, the organizational level and the stakeholder level. In the construction sector, the emphasis has been on performance measurement at the project level because of the nature of the work. Originally, performance measurement in construction was primarily about project performance in terms of time, cost and quality (Ward, Curtis & Chapman 1991). Lin and Shen (2007) reviewed performance measurement studies in the construction industry from 1998 to 2004 and found that 68% of the papers looked at performance measurement at the project level. Furthermore, the dimensions of measurement have also expanded to include softer areas such as the environment, health, safety, customer satisfaction, human resources, technological innovation, and so forth. Nevertheless, this emphasis on the project level raises a few concerns; namely, that its focus is possibly narrow with respect to examining harder issues or quantifiable measures; retrospective in looking back at previous performance; and bottom-line driven or overly focused on short-term gains (Love & Holt 2000). Pillai, Joshi and Rao (2002), for instance, suggest an integrated performance measurement system that links performance metrics across the project selection, execution and implementation phases, rather than examining the phases in isolation, to combat such a narrow focus. In this manner, resources can be more effectively allocated, and projects will be better able to meet organizational goals through a more holistic performance measurement approach.

More recently, there has been a shift towards more organizational-level measurement as companies try to address the need for more aggregate-level measures (Bassioni, Price & Hassan 2005). Measuring on an organizational level offers a better picture of how a company is positioned in the marketplace and whether it has a competitive advantage in certain aspects of its business that may be difficult to evaluate when measuring on a purely project level. Stakeholder-level measurement, on the other hand, is about judging performance from different stakeholder perspectives, which is also important to take into account. Wang and Huang (2006), for example, found a significant relationship between the owner’s, supervisor’s and contractor’s performance and the criteria of overall project success. A comprehensive performance system should be able to address the need for both activity- and aggregate-level measurements. Additionally, taking into account various types of activity measurements helps to create a more robust framework for performance measurement.

Research methods

The research in this paper grew out of a formal search conference in the Norwegian industry on ways to improve the performance of the construction sector. The research project began in 2013 and is currently funded until 2017 by the Norwegian Directorate for Building Quality (DiBK). The first stage of the project involved identifying the challenges in implementing performance measurement systems in the construction industry and how to effectively make use of collected data in order to drive continuous improvement in the sector (Andersen & Langlo 2016). This laid the foundation for the second part of the study, which involved selecting a specific performance measurement system to test in order to see how the tool worked in practice and to what degree the system lent itself to actually creating continuous improvement. CII’s 10–10 system was chosen as the best candidate because it was most likely to meet the requirements decided upon in the first stage of the study.

Data collection was based on projects entered into the 10–10 database within Norway. At the time the analysis was carried out, a total of 45 projects had been entered into the database across 14 invited Norwegian companies (more companies and projects have been entered since). Individual files were extracted from the CII database containing the survey results for each of the Norwegian projects. The data were then entered into SPSS in order to generate descriptive statistics and run bivariate correlation analyses between input measures, output measures, project characteristics and key questions in the survey, in line with guidelines prescribed by Blumberg, Cooper and Schindler (2014). Because of the quantity of data and the large number of potential analysis angles, an initial discussion was held to narrow the research down to the findings that would prove most valuable to the companies involved and to the management in charge of the project.
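The analysis itself was run in SPSS; as a minimal sketch of the equivalent computation, the snippet below runs a bivariate Pearson correlation with two-tailed p-values between each input measure and cost overrun. The file name and column names are assumptions for illustration.

    # Sketch of the bivariate correlation analysis (SPSS was used in the study;
    # this is an equivalent computation in Python). File and column names are
    # illustrative assumptions.
    import pandas as pd
    from scipy.stats import pearsonr

    df = pd.read_csv('1010_projects.csv')  # hypothetical export of the 10-10 data

    input_measures = ['planning', 'organizing', 'leading', 'controlling',
                      'design_efficiency', 'hr', 'quality', 'sustainability',
                      'supply_chain', 'safety']
    for measure in input_measures:
        # Pairwise deletion: keep only projects with both values present
        pair = df[[measure, 'cost_overrun']].dropna()
        r, p = pearsonr(pair[measure], pair['cost_overrun'])
        print(f'{measure}: r = {r:.3f}, p (2-tailed) = {p:.3f}, N = {len(pair)}')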

One limitation was the relatively low number of projects in the database at the time the study took place. This made it difficult to carry out statistically significant analysis between the phases, as well as for some of the individual survey questions or project characteristics, as they were only part of certain phases. Furthermore, of the 45 projects entered into the database, not all of the surveys had a complete set of 10 output measures, also making it difficult to analyse the usefulness of some of the output measures. A possible explanation is that the project management team did not have the necessary data when they filled out the survey or that a measure was not applicable to that project. For example, even though survey data from 45 projects were pulled, only 36 of the projects had valid cost overrun estimates (as can be seen in table 1). As more projects are entered into the database, this limitation will naturally diminish.

Results and discussion

This section identifies some of the key findings from the analysis of the 10–10 data. It was important to see whether poor performance in the input measures resulted in decreased project performance, as gauged by the output measures, using actual project data from the 10–10 system. As discussed in the introduction, the input measures comprised 10 indicators created from a series of around 50 questions asked of project team members about various aspects of the project at the end of any of the five possible phases. These questions were characterized by their more subjective nature and covered areas such as how the team members felt about the different relationships and experiences during the project. The input measures can therefore be considered soft measures. Conversely, the output measures were based solely on numerical metrics such as cost overrun or time delay and can be considered hard measures.

Comparing the input measures to selected output measures, we found that 8 of the 10 input measures had significant negative correlations with the output measure of cost overrun; namely, the planning, organizing, leading, controlling, design efficiency, human resources (HR), quality and supply chain input measures could all be used as indicators of cost overrun in a project. Conversely, the sustainability and safety measures had no significant relation with cost overrun. The results of the bivariate correlations between the input measures and cost overrun can be observed in table 1.

Table 1 Correlations between cost overrun and input measures
Planning Organizing Leading Controlling Design Efficiency HR Quality Sustainability Supply Chain Safety
Cost Pearson Correlation –0.575** –0.590** –0.569** –0.485** –0.412* –0.597** –0.565** –0.131 –0.469** –0.246
Sig. (2-tailed) 0.000 0.000 0.000 0.003 0.013 0.000 0.000 0.446 0.004 0.148
N 36 36 36 36 36 36 36 36 36 36
** Correlation is significant at the 0.01 level (2-tailed).
* Correlation is significant at the 0.05 level (2-tailed).

The findings in this case can be applied on an individual project level as well as on the overall industry level. At the project level, the results can be used to focus on the aspects of the project outlined by the eight input measures in order to reduce cost overrun. More precisely, if the project is already exhibiting high cost overrun in the early phases, the survey questions mapped to the eight input measures could be used to take corrective action and, hopefully, reduce the cost overrun. Preventive measures could also be employed based on these relations.

On an industry level, the findings can be used to understand which input measures do, and do not, underlie cost overrun in a project. In this sense, we can see that cost overrun is not a fitting output measure with regard to safety or sustainability aspects, as the cause-and-effect relation is lacking, reflected in the non-significant correlations. If the industry wishes to make improvements in safety or sustainability, other output measures better suited to these areas may be needed.

It was also quite valuable to look at what information could be gleaned from the individual survey questions used to create the input measures. A selection of the analysed survey questions is given below; note that all these survey questions came from the engineering/design phase, as that was where the majority of the entered projects were from. Respondents answered on a sliding scale from 5 (Strongly Agree) through Agree, Neutral and Disagree to 0 (Strongly Disagree), and the values were averaged over all the surveyed project team members who chose to answer the given survey question. The scores for the individual survey questions were then used as the basis for computing the overall input measure scores (a minimal scoring sketch follows the list of questions).

Survey Question 16: The owner level of involvement was appropriate.

Survey Question 19: The project objectives and priorities were clearly defined.

Survey Question 26: The project team, including project manager(s), had skills and experiences with similar projects/processes.

Survey Question 27: People on this project worked effectively as a team.

Survey Question 28: The project experienced a minimum number of project management team personnel changes.

Survey Question 30: The interfaces between project stakeholders were well managed.

Survey Question 31: Key project team members understood the owner’s goals and objectives of this project.

Survey Question 34: Leadership effectively communicated business objectives, priorities and project goals.

Survey Question 36: Project leaders were open to hearing “bad news,” and they wanted input from project team members.

Survey Question 39: A high degree of trust, respect and transparency existed amongst companies working on this project.

Survey Question 46: The Design phase deliverables received from consulting engineers or other architects were complete and accurate.

Survey Question 48: A dedicated process was used to proactively manage change on this project.

Survey Question 52: The customer was satisfied with the Design phase deliverables.
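As a minimal sketch of the per-question scoring described above, the snippet below maps label responses onto the 5-to-0 scale and averages over respondents. The numeric values assigned to the intermediate labels are an assumption; only the 5 and 0 endpoints are stated in the text.

    # Minimal sketch of scoring one survey question. The intermediate label
    # values (evenly spaced between the stated 5 and 0 endpoints) are an
    # assumption for illustration.
    LIKERT = {'Strongly Agree': 5.0, 'Agree': 3.75, 'Neutral': 2.5,
              'Disagree': 1.25, 'Strongly Disagree': 0.0}

    def question_score(responses):
        """Average score over the team members who answered the question."""
        values = [LIKERT[r] for r in responses if r in LIKERT]
        return sum(values) / len(values) if values else None

    # e.g. three respondents answered survey question 27
    print(question_score(['Agree', 'Strongly Agree', 'Neutral']))  # -> 3.75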

It is evident that a wide range of areas is covered in these survey questions, from customer satisfaction and stakeholder management to project team competence and the relationships between people working on the project. This is intentional, given that the 10–10 system is meant to function as a comprehensive performance measurement system, so a wide variety of issues need to be captured in the survey questions that make up the input measures. The results of a bivariate correlation between the survey questions and the individual input measures can be seen in table 2, using the notation explained beneath the table.

Table 2 Correlations between survey questions and input measures
Survey Question Planning Organizing Leading Controlling Design Efficiency HR Quality Sustain. Supply Chain Safety Total
16-Owner involved X X X S S S S M S 9
19-Objctve defined X X M M 4
26-Skill/experience S X S X S X S S S 9
27-Effective team M S X X S S S M 8
28-Team changes S X X X S X S M M 9
30-Stkhldr manage S X X S S S S S S 9
31-Owner understd S X S S S S X S 8
34-Effec. leaders M S X S M S 6
36-Open leaders S S X S S S S M S 9
39-Project trust S S X S S S S X S 9
46-Design delivery M X X X M 5
48-Change process S S X M 4
52-Cust. satisfied S S S X S S S S S 9
  • An X indicates a superfluous correlation as the question has been used in creating the respective input measure (note that all superfluous correlations were found to be positive and significant, with either moderate or strong strength).
  • An S indicates a significant correlation with positive, strong strength.
  • An M indicates a significant correlation with positive, moderate strength.
  • An empty cell indicates no significant correlation.

It was interesting to see that many of the survey questions could be used as indicators of the scores for input measures they had not originally been intended for, as shown by the non-superfluous significant correlations. This indicates that many of the questions have farther-reaching implications than even the system itself predicts. Comparing these questions to the output measures, we found that 7 of the 13 questions could be used as key indicators when predicting cost overrun, namely survey questions 16, 26, 28, 30, 36, 39 and 52. The results of the bivariate correlations between the survey question scores and cost overrun can be observed in table 3. All seven of the survey questions exhibited significant negative correlations with cost overrun. They were also the questions with the highest number of significant correlations, both superfluous and non-superfluous, with the input measures, as can be seen from the totals in the rightmost column of table 2.

Table 3 Correlations between cost overrun and survey questions
Q16 Q19 Q26 Q27 Q28 Q30 Q31 Q34 Q36 Q39 Q46 Q48 Q52
Cost Pearson Correlation –0.583** –0.146 –0.551* –0.354 –0.463* –0.602** –0.360 –0.168 –0.503* –0.520* –0.094 –0.342 –0.765**
Sig. (2-tailed) 0.007 0.540 0.012 0.125 0.040 0.005 0.119 0.479 0.024 0.019 0.772 0.140 0.000
N 20 20 20 20 20 20 20 20 20 20 12 20 20
** Correlation is significant at the 0.01 level (2-tailed).
* Correlation is significant at the 0.05 level (2-tailed).

Some of the results are expected, such as that customer satisfaction can be used as a predictor of cost overrun, given that it is a critical success criterion itself (Sanvido et al. 1992), or that the emotional intelligence of the project manager may play a significant role in project success (Rezvani et al. 2016). Although many of the survey questions deal with areas that would seem logically connected to project success, it is very valuable for organizations to see exactly which performance drivers are linked to which output measures based on empirical data. Through a mapping file provided by CII, organizations can see the exact questions that form the basis for the input measure scores. Furthermore, it appears that many of the survey questions have additional significant relations with input measures not explicitly listed in the mapping file. In this manner, organizations can focus their efforts on key survey questions that are tied to many performance drivers, as well as get a broad overview by viewing the aggregate benchmarked scores for both the input and output measures.

There is also potential to use the 10–10 system to understand more veiled obstacles. For example, perhaps an organization is weighing the worth of completing a life-cycle cost analysis for a project. Besides the obvious economic aspects, such as whether there is an allocated budget for it or whether there are contractual requirements to do it, there may be value in examining the 10–10 system for similar areas of overlap. Because of the extensive breadth of the 10–10 questionnaire and the range of the input measures, a great many aspects of projects are captured. In this case, the organization could examine various relations with the sustainability input measure in more detail through specific survey questions, provided that the organization has entered enough projects into the database to conduct such analysis. One of the relevant questions is survey question 10, given below (Construction Industry Institute 2015).

Survey question 10: Was a life-cycle cost analysis completed for this project?

  • Yes
  • No

The results of the bivariate correlations between the input measures and survey question 10 responses can be observed in table 4.

Our analysis on an aggregate level reveals that, as expected, the Yes and No responses exhibit significant positive and negative correlations, respectively, with the sustainability input measure, as the question is used to form the score for that measure in the mapping file. Unexpectedly, however, we see that there is a significant negative correlation between the Yes response and the leading measure, indicating that projects that completed a life-cycle cost analysis had lower scores on this measure than those that did not. A possible explanation is that projects that needed to complete a life-cycle cost analysis involved a larger number of stakeholders and thus had more conflicts among leadership. This type of information could be leveraged to implement additional conflict management training or to make sure roles and responsibilities are more clearly defined in the project team in the front-end phase when conducting a life-cycle cost analysis.

Table 4 Correlations between survey question 10 responses and input measures
Planning Organizing Leading Controlling Design Efficiency HR Quality Sustainability Supply Chain Safety
Q10_Yes Pearson Correlation –0.259 –0.362 –0.442* –0.404 –0.258 –0.223 –0.297 0.495* –0.265 –0.112
Sig. (2-tailed) 0.256 0.107 0.045 0.070 0.259 0.332 0.192 0.023 0.246 0.628
N 21 21 21 21 21 21 21 21 21 21
Q10_No Pearson Correlation 0.020 0.031 0.185 0.151 –0.017 0.182 0.144 –0.571** 0.090 –0.165
Sig. (2-tailed) 0.931 0.893 0.422 0.515 0.940 0.430 0.532 0.007 0.698 0.475
N 21 21 21 21 21 21 21 21 21 21
* Correlation is significant at the 0.05 level (2-tailed).
** Correlation is significant at the 0.01 level (2-tailed).
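Correlating a binary Yes/No response with a continuous input measure amounts to a point-biserial correlation, which reduces to Pearson’s r on a 0/1 dummy coding. A minimal sketch is below, with Yes and No coded as separate indicators to mirror the two rows of table 4; column names are assumptions for illustration.

    # Sketch of correlating dummy-coded Yes/No responses with an input measure
    # (a point-biserial correlation). Column names are illustrative assumptions.
    import pandas as pd
    from scipy.stats import pearsonr

    df = pd.read_csv('1010_projects.csv')  # hypothetical export
    # Separate indicators, mirroring the Q10_Yes / Q10_No rows in table 4
    df['q10_yes'] = (df['q10'] == 'Yes').astype(int)
    df['q10_no'] = (df['q10'] == 'No').astype(int)

    for dummy in ('q10_yes', 'q10_no'):
        pair = df[[dummy, 'leading']].dropna()
        r, p = pearsonr(pair[dummy], pair['leading'])
        print(f'{dummy} vs leading: r = {r:.3f}, p = {p:.3f}, N = {len(pair)}')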

Variations as a result of project characteristics were also analysed. Differences were examined in areas such as project contract type, delivery method and nature, among others. Although no statistically significant findings were observed, most likely because of the small sample size, the capability to see differences in the output measures as a function of the characteristics is valuable for customizing action plans to fit different projects by anticipating possible problem areas. For example, table 5 shows the time delay for lump sum versus cost plus contracts.

Table 5 Lump sum and cost plus contract time delay statistics
N Minimum Maximum Mean Standard Deviation
Lump Sum 16 –0.66 0.12 –0.1095 0.20919
Cost Plus 9 –0.36 0.29 0.0184 0.16819

The results show that the projects utilizing a lump sum contract were on average 11% ahead of schedule, whereas those utilizing a cost plus contract were on average 2% behind schedule. The findings can be partially explained by the incentives offered to the contractor. In lump sum contracts, for instance, the contractor receives a fixed payment to complete the project, and thus it is to their benefit to finish as soon as possible to maximize their profit. In a cost plus contract, contractors may not have the same incentive to finish on time, given that they are paid based on materials used and hours worked, which may explain the discrepancy between the two. If an organization carries out a project under a cost plus rather than a lump sum contract, additional time buffers may therefore be needed in schedule planning because of the greater likelihood of time delay in the project. It is important to note that an independent samples t-test gave a p-value of 0.131, indicating that these findings are not statistically significant. However, with a larger sample size, we would expect the overall pattern to hold.
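As a minimal sketch of the reported comparison, the snippet below runs an independent samples t-test on time delay by contract type. The file name, group labels and column names are assumptions for illustration.

    # Independent samples t-test of time delay between contract types.
    # Column names and group labels are illustrative assumptions.
    import pandas as pd
    from scipy.stats import ttest_ind

    df = pd.read_csv('1010_projects.csv')  # hypothetical export
    lump_sum = df.loc[df['contract_type'] == 'Lump Sum', 'time_delay'].dropna()
    cost_plus = df.loc[df['contract_type'] == 'Cost Plus', 'time_delay'].dropna()

    t, p = ttest_ind(lump_sum, cost_plus)  # the paper reports p = 0.131
    print(f't = {t:.3f}, p = {p:.3f}')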

These are just a few examples of how the 10–10 data can be used effectively by organizations. Because of the breadth of the survey, a high degree of customization is available to match a client’s specific needs, allowing analysis on multiple measurement levels. What is fundamentally valuable about the 10–10 system is that the exact areas of concern can be examined through the input measures or survey questions and their relations to the output measures, giving concrete areas for improving project plans. Furthermore, aggregate benchmarking statistics are also collected, enabling performance measurement at both the organizational and national/sector levels.

Conclusion

This study was an initial attempt to see how useful the 10–10 system is in practice, particularly for the Norwegian construction industry, as the analysis was based exclusively on projects carried out in Norway. Additionally, access to the empirical data of multiple companies in the 10–10 system allowed for valuable insights and the exploitation of project-level data for broader industry analysis. Some of the findings in this sense are relatively novel, as such analysis has not been possible before owing to the difficulty of acquiring such detailed aggregate data. The results show that the 10–10 system offers high potential for improving project performance as well as for conducting broader organizational and national analysis. A select few findings have been discussed in this paper that give a brief glimpse into how the various aspects of the survey can be exploited for performance measurement improvement efforts.

Many key observations were made at the aggregate level that could be applied to individual project teams to help them perform better, as well as assist management in taking corrective or preventive action to promote continuous improvement. Even though we were only able to find significant results concerning the cost overrun measure, this could be attributed to the relatively small sample size and possible variations between the measures across phases or project areas. Also, because of time constraints and the large number of analysis angles, not all output measures were examined to the same degree, and some significant findings may have gone undiscovered.

One relatively new idea introduced in this paper is utilizing the same system for project- and organizational-level performance measures to examine broader industry trends. The value in this is that there is a clear connection between the levels of measurement built into the system. This contrasts with many other systems that focus on measuring on only one level, forcing organizations to utilize multiple measurement systems to meet the demands of all the stakeholders involved. In this manner, the 10–10 system circumvents the need for external productivity statistics, as all projects entered into the database are used to create aggregate statistics for industry benchmarking.

Future research could include further analysis of variations in input measures, output measures, survey questions and project characteristics that were not examined because of the lack of significance resulting from the small pool of projects in the database at the time the study took place. Potential areas include examining variations in project location, phase, type (infrastructure, industrial or building) and nature, among others. Additionally, if companies employing 10–10 used the system over a longer period and entered a significant number of projects into the database, it would also be possible to analyse the projects within the company itself. In this manner, significant cause-and-effect relationships unique to the company could be identified, providing a more tailored improvement plan.

As the 10–10 database expands to include more projects, the reliability and validity of the results will only increase. Additional significant findings will no doubt be discovered that will prove invaluable for organizations employing the 10–10 system to more effectively monitor and improve project performance, as well as for key policymakers monitoring the overall performance of the construction sector.

References

Andersen, B. & Langlo, J.A. 2016, ‘Productivity and performance measurement in the construction sector’, CIB World Building Congress 2016, Tampere University of Technology, Tampere, Finland.

Anderson, K. & McAdam, R. 2004, ‘A critique of benchmarking and performance measurement: lead or lag?’, Benchmarking: An International Journal, vol. 11, pp. 465–83. https://doi.org/10.1108/14635770410557708

Atkinson, A.A., Waterhouse, J.H. & Wells, R.B. 1997, ‘A stakeholder approach to strategic performance measurement’, MIT Sloan Management Review, vol. 38, p. 25.

Bassioni, H.A., Price, A.D. & Hassan, T.M. 2004, ‘Performance measurement in construction’, Journal of Management in Engineering, vol. 20, pp. 42–50. https://doi.org/10.1061/(ASCE)0742-597X(2004)20:2(42)

Bassioni, H.A., Price, A.D. & Hassan, T.M. 2005, ‘Building a conceptual framework for measuring business performance in construction: an empirical evaluation’, Construction Management and Economics, vol. 23, pp. 495–507. https://doi.org/10.1080/0144619042000301401

Blumberg, B.F., Cooper, D.R. & Schindler, P.S. 2014, Business research methods, McGraw-Hill Education, New York.

Construction Industry Institute 2015, 10–10 Questionnaires, University of Texas at Austin, viewed 10 January 2017. https://wikis.utexas.edu/display/CII1010/10-10+Questionnaires

Goodrum, P.M., Haas, C.T. & Glover, R.W. 2002, ‘The divergence in aggregate and activity estimates of US construction productivity’, Construction Management & Economics, vol. 20, pp. 415–23. https://doi.org/10.1080/01446190210145868

Harrison, P. 2007, ‘Can measurement error explain the weakness of productivity growth in the Canadian construction industry?’, Centre for the Study of Living Standards, Ontario.

Kaplan, R.S. & Norton, D.P. 1996, The balanced scorecard: translating strategy into action, Harvard Business Press, Cambridge, MA.

Lin, G. & Shen, Q. 2007, ‘Measuring the performance of value management studies in construction: critical review’, Journal of Management in Engineering, vol. 23, pp. 2–9. https://doi.org/10.1061/(ASCE)0742-597X(2007)23:1(2)

Love, P.E. & Holt, G.D. 2000, ‘Construction business performance measurement: the SPM alternative’, Business Process Management Journal, vol. 6, pp. 408–16. https://doi.org/10.1108/14637150010352417

Muya, M., Kaliba, C., Sichombo, B. & Shakantu, W. 2013, ‘Cost escalation, schedule overruns and quality shortfalls on construction projects: the case of Zambia’, International Journal of Construction Management, vol. 13, pp. 53–68. https://doi.org/10.1080/15623599.2013.10773205

Neely, A. & Adams, C. 2001, ‘The performance prism perspective’, Journal of Cost Management, vol. 15, pp. 7–15.

Neely, A., Gregory, M. & Platts, K. 1995, ‘Performance measurement system design: a literature review and research agenda’, International Journal of Operations & Production Management, vol. 15, pp. 80–116. https://doi.org/10.1108/01443579510083622

Pillai, A. S., Joshi, A. & Rao, K.S. 2002, ‘Performance measurement of R&D projects in a multi-project, concurrent engineering environment’, International Journal of Project Management, vol. 20, pp. 165–77. https://doi.org/10.1016/S0263-7863(00)00056-9

Rezvani, A., Chang, A., Wiewiora, A., Ashkanasy, N.M., Jordan, P.J. & Zolin, R. 2016, ‘Manager emotional intelligence and project success: the mediating role of job satisfaction and trust’, International Journal of Project Management, vol. 34, no. 7, pp. 1112–22. https://doi.org/10.1016/j.ijproman.2016.05.012

Robinson, H.S., Anumba, C.J., Carrillo, P.M. & Al-Ghassani, A.M. 2005, ‘Business performance measurement practices in construction engineering organisations’, Measuring Business Excellence, vol. 9, pp. 13–22. https://doi.org/10.1108/13683040510588800

Sanvido, V., Grobler, F., Parfitt, K., Guvenis, M. & Coyle, M. 1992, ‘Critical success factors for construction projects’, Journal of Construction Engineering and Management, vol. 118, pp. 94–111. https://doi.org/10.1061/(ASCE)0733-9364(1992)118:1(94)

Takim, R., Akintoye, A. & Kelly, J. 2003, ‘Performance measurement systems in construction’, in D.J. Greenwood (ed.), 19th Annual ARCOM Conference, 3–5 September 2003, University of Brighton, Association of Researchers in Construction Management, Vol. 1, pp. 423–32.

Wang, X. & Huang, J. 2006, ‘The relationships between key stakeholders’ project performance and project success: perceptions of Chinese construction supervising engineers’, International Journal of Project Management, vol. 24, pp. 253–60. https://doi.org/10.1016/j.ijproman.2005.11.006

Ward, S., Curtis, B. & Chapman, C. 1991, ‘Objectives and performance in construction projects’, Construction Management and Economics, vol. 9, pp. 343–53. https://doi.org/10.1080/01446199100000027

Wegelius-Lehtonen, T. 2001, ‘Performance measurement in construction logistics’, International Journal of Production Economics, vol. 69, pp. 107–16. https://doi.org/10.1016/S0925-5273(00)00034-7

Yang, H., Yeung, J.F., Chan, A.P., Chiang, Y. & Chan, D.W. 2010, ‘A critical review of performance measurement in construction’, Journal of Facilities Management, vol. 8, pp. 269–84. https://doi.org/10.1108/14725961011078981

About the Authors

Kimiyoshi Oshikoji has an MSc in project management with specialization in production and quality engineering from the Norwegian University of Science and Technology. He also holds an MSc in management from the University of Waterloo and a BSc in electrical engineering from the University of California, San Diego. He previously worked for IBM in a project manager role and is currently working as a business analyst in the financial sector.

Bjørn Andersen is a professor of quality and project management at the Norwegian University of Science and Technology. He has authored/co-authored around 20 books and numerous papers for international journals/conferences. He has managed/been involved in several national/international research projects. He serves as Director of Project Norway, is an Academic in the International Academy of Quality, is co-editor of the International Journal of Production Planning & Control, and directs the NTNU master program in mechanical engineering.
