Monitoring Customer Perceived Service Quality and Satisfaction during the Construction Process

Perry Forsythe

School of the Built Environment, University of Technology Sydney, Australia

Abstract

Service quality has been studied across many construction-related disciplines, but little has been done concerning how it affects customer satisfaction during the day-to-day dynamics of onsite construction services. The research explores this setting in Australian housing construction projects. A highly detailed single case study methodology was used with a view to facilitating theory development for a targeted customer type displaying service quality oriented expectations, high involvement, but low construction experience. Gap scores for perceived service quality and customer satisfaction were systematically monitored during construction. Concurrently, interviews were used to obtain incident data linked to the scoring data. It was found that service incidents, service quality and customer satisfaction were linked at each stage of construction. Related aspects included the ratio between positive and negative incidents; a saturation point regarding negative incidents; and an end of process/product realisation factor. The need to identify active service quality dimensions during construction was also highlighted (especially reliability and care in execution of work). An incident coding structure was developed whereby frequently recurring incident features included spontaneous situations, site observations, personal interaction, subcontractor involvement, progressive product quality, progressive construction activity and defensive customer action. The research recommends that construction contractors aim to control the above features by creating orchestrated incidents and by controlling exposure to perceptions via fast and seamless onsite construction.

Keywords: Customer satisfaction, customer behaviour, house building, service quality, construction process

Paper type: Research article

Copyright: Construction Economics and Building 2015. © 2015 Perry Forsythe. This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 International (CC BY 4.0) License (https://creativecommons.org/licenses/by/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material for any purpose, even commercially, provided the original work is properly cited and states its license.

Citation: Forsythe, P., 2015. Monitoring Customer Perceived Service Quality and Satisfaction during the Construction Process. Construction Economics and Building, 15(1), 19-42. DOI: http://dx.doi.org/10.5130/ajceb.v15i1.4172

Corresponding author: Perry Forsythe; Email - Perry.Forsythe@uts.edu.au

Publisher: University of Technology Sydney (UTS) ePress

Introduction

Quality has been at the heart of debate about business improvement for the last 80 years, including reference points such as Shewhart (1931), Feigenbaum (1983), Crosby (1984), Juran (1988), Deming (1982), Taguchi (1986), Womack, Jones and Roos (2008), ISO 9000 (2008) and the implementation of Six Sigma (Walter, 2005). Over this period, the tone of the debate has gradually shifted from a predominantly supply-side perspective of quality, to one that increasingly aims to deliver customer value and satisfaction. This is largely because quality is seen as a significant contributor to business success by increasing market share, profits and customer equity (Kotler and Armstrong, 2013; Victor and Boynton, 1998). The marketing discipline has especially focused on service quality because of the increasing scale and importance of the service economy (Cravens, et al., 1988; Wilson, et al., 2012). Value is now seen by customers in terms of what they get out of services, and not just what goes in – hence the importance of understanding customer perceptions (Vandermerwe, 1994). Vargo and Lusch (2004) have even proposed a service-dominant logic whereby goods have simply become a vehicle for service delivery, targeted to meet customer-specific needs.

The focus on customer perceptions stems from behaviour theory as distinct from economic theory (Engel, Blackwell and Miniard, 1993). For instance, transaction economics, principal-agent theory and information asymmetry all revolve around explaining the structured economic interplay between the parties directly involved in market-based transactions (Jäger, 2008; Jensen and Meckling, 1979). Conversely, the customer-driven approach focuses more on the emotive drivers and modes of decision-making that specifically influence individual customer perceptions. Consequently, methodological issues are dealt with via behaviour-based theories that try to understand how customers make decisions in certain buying situations and how these decisions are later evaluated (Foxall, 1990). Comprehensive theories of customer behaviour began to evolve in the late 1960s, with the Nicosia (1966) model, the Howard and Sheth (1969) model, the Engel, Kollat and Blackwell (1968) model and Bettman’s (1979) information-processing model of consumer choice. They provide an integrated view of customer behaviour, identify components in purchase decision-making, and offer a framework for testing and developing theory (Bednall and Kanuk, 1997).

Much has been written on the conceptual development of customer-perceived service quality, as well as its application to the construction industry, including distinct areas of service provision such as building designers (Bubshait, et al., 1999; Day and Barksdale Jr, 1992; Hoxley, 2000; Love, et al., 2000), consulting services (Hoxley, 1994; Hoxley, 2000; Samson and Parker, 1994), quantity surveying (Procter and Rwelamila, 1999) and building maintenance (Siu, et al. 2001). Others have examined broader-based conceptualizations concerning the integration of service quality into construction management models (Winch, et al., 1998) or considered service quality in the context of public interest procurement strategies (Ling and Chong, 2005; Tranfield, et al. 2005).

There has also been considerable research into the related area of customer satisfaction, beginning with authors such as Cardozo (1965) and Oliver (1977; 1980a). From this, satisfaction is commonly described as being a comparison between customers’ pre-purchase expectations and their post-purchase perceptions (Oliver, 1993). Although this provides a simple paradigm of how customers evaluate their level of satisfaction, the inherent individuality of such evaluations makes it hard to generalize findings to broader populations. Therefore, it is important to focus on specific customer characteristics and specific market settings to make sense of the paradigm.

While customer satisfaction and service quality are not new to the construction management literature, the day-to-day dynamics of onsite construction services have yet to receive much attention. This is therefore the focus of the research reported in this paper. The research examined the Australian housing construction sector and the impact of service quality on customer-perceived satisfaction. As argued below, these customers deserve individual study since customer behaviour differs from one market setting to the next (Seth, et al., 2005). Customers in this market may be particularly sensitive to service quality because of their low experience with construction but their high level of involvement in observing the process.

Housing Construction Customers and the Market Context in Australia

A conundrum for the housing construction industry in Australia is that the approaches taken to quality differ somewhat from the classical approaches used in manufacturing production processes. For instance, manufacturing typically occurs on a mass scale and without much direct involvement from the customer. It focuses on controlling, measuring and managing the reliability of production processes and end-product compliance, striving for objectivity and homogeneity (Garvin, 1987; Garvin, 1983). While aspects of this approach are clearly apparent in Australian housing construction, customers are more directly involved in the delivery process. This typically begins with the design phase of the project, but this research focuses on the construction phase. Here, customers are potentially more aware of service quality because they are able to observe the service delivery (construction) as it happens dynamically onsite, and what they see potentially shapes what they expect of the end product.

The majority of customers in Australian housing construction purchase for personal need, as 76% of tenures involve owner-occupied housing (Australian Bureau of Statistics, 2011). Despite a gradual move to higher-density housing, 75% of the stock still involves detached housing built primarily on separately owned allotments (Australian Bureau of Statistics, 2011). Unlike multi-unit housing where the building is usually built first and then sold to the customer, in the above scenario the customer is more directly involved in driving a relatively fragmented delivery process. Forsythe (2007a) outlined this process, which includes customers buying an allotment of land, choosing a standard design from a ‘volume builder’ or developing a design with an architectural firm, obtaining tender prices from contractors for the design(s) and contracting the construction to the chosen contractor. The contract for construction is typically with a single organization, but the vast majority of the physical work is subcontracted to trade-specific service providers (Banks, et al., 2004; Joint Select Committee on the Quality of Buildings, 2002).

As few (if any) intermediary consultants are involved during the actual construction process, the customer’s service relationship is a relatively singular one with the contractor. Yet customers in this market only irregularly procure construction services (i.e. for personal need) and consequently tend to lack technical, contractual and managerial experience in construction (Joint Select Committee on the Quality of Buildings, 2002 p.1). They are likely to behave differently to commercial construction customers and must draw on their generalist consumer experience rather than a deep understanding of the construction process. Moreover, they are highly and emotionally involved in both the process and the outcome as they work toward a house that reflects their personal needs and taste (Joint Select Committee on the Quality of Buildings, 2002).

The above commentary identifies the need for deeper study into the chosen customer type. Their dichotomous situation of potentially having high levels of emotional involvement and low levels of expertise means that these customers may be more aware of service quality as an issue that influences their satisfaction. This research therefore focuses on how such perceptions form during the daily dynamics of onsite construction, qualitatively exploring and understanding detailed patterns of individual customer behaviour. This differs from many of the previously reported studies that take a more quantitative approach to measuring general service quality attitudes. Instead, this study focuses on conceptual development, in order to facilitate a more theoretically driven agenda for ongoing customer research in construction. Ultimately, it aims to help construction contractors be more aware of and responsive to customer needs, prevent the fallout from disputes during construction, and reap the benefits achievable from customer satisfaction, including repeat business, word-of-mouth recommendation and customer loyalty (Bei and Chiao, 2001; Parasuraman, Zeithaml and Berry, 1994a; Sunindijo, Hadikusumo and Phangchunun, 2014).

The Conceptual Framework for Service Quality

Perceived service quality is often conceptualized as a type of subjective attitude whereby the more knowledge that a person has about something, the more likely they are to form positive or negative attitudes (Bednall and Kanuk 1997; Bolton and Drew 1991; Patterson and Johnson 1993; Zeithaml 1988).

Parasuraman, Zeithaml and Berry (1985) conceptualized service quality as a number of gaps in the service delivery process, which ultimately culminate in a critical gap between customer perceptions and expectations. They dimensionalized service quality into five components – reliability, responsiveness, assurance, empathy and tangibles – and operationalized the measurement of these components into the now well-known SERVQUAL survey instrument (Parasuraman, Zeithaml and Berry, 1988). The instrument includes 22 standardized statements distributed across the five dimensions, and utilizes a Likert scale that determines the gap by subtracting the separately scored expectation from the perception for each statement (Parasuraman, Zeithaml and Berry, 1988). Gap scores for each dimension are then averaged to provide an overall service quality score.
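Expressed formally (a notational sketch consistent with the description above; the symbols are introduced here for illustration rather than taken from the original instrument documentation), the scoring can be written as:

\[
g_i = P_i - E_i, \qquad G_d = \frac{1}{n_d}\sum_{i \in d} g_i, \qquad SQ = \frac{1}{5}\sum_{d=1}^{5} G_d
\]

where \(P_i\) and \(E_i\) are the perception and expectation ratings for statement \(i\), \(G_d\) is the average gap for dimension \(d\) containing \(n_d\) statements, and \(SQ\) is the overall service quality score.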

The validity of the above dimensionalization and its operationalization using SERVQUAL has been criticized by authors such as Carman (1990), Babakus and Boller (1992), Brown, Churchill and Peter (1993) and Cronin and Taylor (1992; 1994). Some have also downplayed the relevance of expectations in the measurement of service quality. For instance, Cronin and Taylor (1994) developed an alternative approach to measurement (SERVPERF) based purely on customer perceptions of performance. In response, Parasuraman, Berry and Zeithaml (1991; 1993; 1994a; 1994b) undertook further testing of the five-dimensional construct and refined SERVQUAL to underpin its validity and operational relevance. This included statistical testing to determine the stability of the dimensions, simplification of the measurement of perceptions and expectations, and recommendations concerning allowable changes to the instrument that could be made to suit industry-specific applications.

Despite the criticism, SERVQUAL has retained its longevity, especially because of the psychometric advantages it holds over other methods. In particular, it has better diagnostic ability than the main competing instrument, SERVPERF (Jain and Gupta, 2004). This diagnostic ability is of central importance to the underlying objectives of this study, as it enables a detailed exploration of service quality perceptions during construction. Consequently, SERVQUAL was drawn upon as a base means for measuring perceived service quality in this study.

Refinement of Service Quality Measurement in Housing Construction

Specific applications of SERVQUAL within the construction discipline include a study by Samson and Parker (1994), who utilized SERVQUAL in the consulting engineering industry. Nelson and Nelson (1995) created RESERV, a modified version that added two dimensions in dealing with real estate brokerage, and Hoxley (1994) developed SURVEYQUAL for building surveying firms, which was later applied to construction professionals (Hoxley, 2000). Siu, Bridge and Skitmore (2001) and Lai and Pang (2010) also developed modified versions to assess building maintenance services.

More specific to the housing construction sector, Sunindijo, Hadikusumo and Phangchunun (2014) developed a four-dimensional version of SERVQUAL to assess the relationship between service quality perceptions and customers’ behavioural intentions in ongoing business relations with Thai housing contractors. Holm (2000a; 2000b) studied housing associations in Sweden and the service quality perceptions of apartment occupants with regard to refurbishment services.

These studies have all adapted or modified SERVQUAL to suit their individual market settings, but these versions have limited transferability as customers tend to change behaviour according to the market conditions they encounter (Seth, Deshmukh and Vrat, 2005). Therefore, with regard to Australian housing customers, Forsythe’s (2007b) adaptation of SERVQUAL – BUILDSERV – was considered particularly useful to this study (see Appendix A). It includes two additional dimensions that aim to tap into the physical nature of the work that customers will observe during onsite processes: physical work output (the progressive realization of the end product), and care in execution of work (the care taken by tradespeople during the production process). BUILDSERV merges expectations and perceptions into a singular measurement scale, in response to criticism of their separate measurement (Forsythe, 2007b).
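A minimal scoring sketch is given below. It assumes that the 1-9 ideal-anchored scale in Appendix A is converted to a gap by subtracting the scale midpoint (taken here to represent ‘the same as my ideal level’); the statement-to-dimension mapping shown is hypothetical, and the seventh dimension (physical work output) sits in the second part of the survey and is omitted. This is illustrative only and not the study’s actual scoring procedure.

```python
from statistics import mean

# Hypothetical allocation of BUILDSERV statement numbers to dimensions
# (illustrative only; the actual allocation is defined by the instrument
# in Appendix A and Forsythe, 2007b).
DIMENSIONS = {
    "reliability": [1, 2, 3, 4, 5],
    "responsiveness": [6, 7, 8, 9],
    "personal assurance": [10, 11, 12, 13],
    "empathy": [14, 15, 16, 17, 18],
    "tangibles": [19, 20, 21],
    "care in execution of work": [22, 23, 24, 25, 26],
}

SCALE_MIDPOINT = 5  # assumed to correspond to "the same as my ideal level"


def dimension_gaps(responses):
    """Convert 1-9 ideal-anchored ratings into per-dimension gap scores.

    `responses` maps statement number -> rating, or None for 'no opinion'.
    A positive gap means performance above the customer's ideal expectation.
    """
    gaps = {}
    for dimension, items in DIMENSIONS.items():
        scored = [responses[i] - SCALE_MIDPOINT
                  for i in items if responses.get(i) is not None]
        gaps[dimension] = mean(scored) if scored else None
    return gaps


def overall_gap(gaps):
    """Average the available dimension gaps into one service quality score."""
    valid = [gap for gap in gaps.values() if gap is not None]
    return mean(valid) if valid else None
```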

Customer Satisfaction and Linkages to Service Quality

For many years, concepts of customer satisfaction have been structured around the way that expectations act as a comparison standard for making satisfaction evaluations. Conceptualizations of satisfaction differ from those discussed under the development of service quality and go as far back as the mid-1960s via authors such as Cardozo (1965). The disconfirmation of expectations paradigm was developed by Oliver (1980a; 1980b; 1993; 2006; 2010) and has dominated debate about satisfaction for much of the subsequent period (Magnini, et al., 2012; Patterson and Johnson, 1993; Spreng and Mackoy, 1996; Yüksel and Yüksel, 2001). The model posits that satisfaction is based on the disconfirmation of pre-purchase expectations by post-purchase perceptions, a comparative evaluation that influences the strength and direction of customer satisfaction/dissatisfaction (Oliver, 1993). The evaluation focuses on an affective or emotional response (Oliver, 1981).

The disconfirmation model and other conceptualizations of satisfaction were discussed by Yi (1990) and, more recently, by Seth, Deshmukh and Vrat (2005). Yi’s (1990) well-structured critical analysis divided the debate about satisfaction into its definition, measurement, antecedents and consequences. Yi (1990) supported the view that satisfaction evaluations are based on a comparison standard, whereby satisfaction results from pre-experience comparison standards and disconfirmation. As his findings were consistent with Oliver’s model, credence is given to this model in this study.

For some time, customer satisfaction and service quality were studied purely as separate constructs, but there was an obvious need to ultimately determine the relationship between them (Bei and Chiao, 2001; Cronin Jr, et al., 2000). The importance of service quality as an input into customer satisfaction is supported by other authors including Cronin and Taylor (1992), Oliver (1993), Parasuraman, Zeithaml and Berry (1994a; 1994b), Patterson and Johnson (1993) and Spreng and Mackoy (1996). Even so, specifics relating to the construction industry still represent a relatively under-researched area. Past work by Oliver (1993; 2010) provides a degree of guidance in a proposed model that sees service quality as an antecedent to customer satisfaction. Further, he modelled customer satisfaction as being based on predictive expectations (or what customers expect will happen in a given purchasing situation), while service quality was based on customers’ idealized or desired expectations (or what customers expect should happen). Oliver’s model (1993) was statistically tested for structural validity by Spreng and Mackoy (1996), whose results confirmed that service quality is an antecedent to customer satisfaction as well as being strongly related to it. Though there is still ongoing debate about the finer points of how the two constructs are related, the use of different expectation standards was adopted when measuring customer satisfaction and service quality in this research (see Appendix A).

Customer Expectations in Housing Construction

As indicated above, expectations play an important role in predicting and influencing what customers will be attuned to in their satisfaction and service quality evaluations. Little conceptual development could be found specifically on expectations within the construction literature. The main exception is Forsythe’s (2012) study of the expectations of a sample of 52 Australian housing construction customers, which examined the process they went through when selecting a construction contractor. His modelling of behaviour patterns identified two contrasting customer types – where service quality was fully present or fully absent – in the choice of contractor. As this research focuses on service quality, it leveraged Forsythe’s (2012) research by utilizing customers in whom service quality was fully present, which includes the following traits:

Research Method

An in-depth case-study approach was used for gathering and analysing data. The emphasis was on a single case study in order to explore, identify and understand detailed patterns of individual behaviour for the targeted customer during construction (as distinct from broader-based sampling of known variables).

This approach is supported by Flyvbjerg’s (2006) debate on the virtues of case-study research and the shortcomings of the hypothetico-deductive approach. He built upon past debate to dispel misconceptions of case-study research and asserted that it is not only useful for generating and testing hypotheses, but also has the potential to generalize findings from even a single case study. The strategic selection of cases is important for generalizability, particularly where at least part of the objective is to gain the greatest possible amount of information on a given phenomenon (Ragin, 1992). This approach is capable of unveiling more information and developing high-level, nuanced expertise, in contrast to a hypothetico-deductive approach, which does not provide context-dependent knowledge and limits hypothesis testing to creating rules (Flyvbjerg, 2006). Flyvbjerg (2006) argued that it is often more important to clarify the deeper causes behind a given problem and its consequences than to purely describe the basic symptoms of the problem. These issues are particularly relevant here, where little academic knowledge is currently held.

In this research, a mix of quantitative and qualitative approaches was used to help blend measurability with inductive learning about customer behaviour patterns (Yin, 2009). The case study focused on the customer as the primary unit of analysis and spanned the full duration of the customer’s involvement in the housing construction process. The head contractor and associated subcontractors represented the main service providers of interest.

In terms of quantitative data gathering, the customer’s perception of service quality was measured using the BUILDSERV instrument (see Appendix A). In order to monitor changes during the project, the survey was administered repeatedly at specific stages during construction, including completion of floor construction, superstructure, wall linings and practical completion. These stages coincided appropriately with progress payments made by the customer to the contractor, as the act of payment to some extent forces customers to evaluate their satisfaction/dissatisfaction with the service being provided. Customer satisfaction was also measured in order to check the impact of service quality upon it, including a direct question about satisfaction and a supporting question about whether or not the customer would recommend the builder to other people.
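As a concrete illustration of how such repeated measurements might be recorded (the record layout and field names below are hypothetical, not taken from the study), each data gathering point could be stored as one structured record:

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

# The four data gathering points used in the study.
STAGES = ["floor", "superstructure", "linings", "practical completion"]


@dataclass
class StageRecord:
    """One data gathering point, taken when a progress payment falls due."""
    stage: str
    satisfaction_gap: Optional[float] = None      # direct satisfaction question
    would_recommend: Optional[bool] = None        # supporting recommendation question
    buildserv_responses: Dict[int, Optional[int]] = field(default_factory=dict)


# Example record captured at floor stage (values are illustrative only).
floor_record = StageRecord(
    stage="floor",
    satisfaction_gap=1.0,
    would_recommend=True,
    buildserv_responses={1: 6, 2: 5, 3: 6},  # statement number -> 1-9 rating
)
```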

In terms of qualitative data gathering, face-to-face interviews were undertaken immediately after the BUILDSERV instrument was administered. This was done at each of the repetition points mentioned above (i.e. four separate interviews at four separate stages of construction). These interviews aimed to explore the service incidents and activities contributing to the BUILDSERV scores at each point in time.

The interview questions were driven by the scores from the completed BUILDSERV survey. For instance, customers were asked to explain in detail the events leading to their scores – especially those leading to distinctly positive or negative scores – and encouraged to recount episodes, incidents and the general storyline of what had occurred. Each interview lasted between 50 and 90 minutes, and the full interview set (across all four stages of the project) involved 350 minutes of interview data. Each interview was recorded, transcribed and then thematically analysed to structure, categorize and sequence the data (Boyatzis, 1998).

Thematic analysis is similar to content analysis but focuses on categorizing theme frequency rather than word frequency, which provides a more expressive tool for developing theory (Boyatzis, 1998; Holsti, 1969). The coding scheme in this research combined an a priori approach, operating within the concepts discussed earlier in the paper, with a more inductive approach where new content arose from the interview process (Boyatzis, 1998, p.44).
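To make the frequency-based coding concrete, the following sketch tallies how often each theme appears across a set of coded incidents. The incident records are invented for illustration, although the theme labels reuse some of the incident features reported later in the results.

```python
from collections import Counter

# Each coded incident carries a valence and a set of theme codes
# (these example records are hypothetical, not data from the study).
incidents = [
    {"valence": "positive", "themes": ["site observation", "personal interaction"]},
    {"valence": "negative", "themes": ["spontaneous situation", "subcontractor involvement"]},
    {"valence": "negative", "themes": ["spontaneous situation", "site observation"]},
]


def theme_frequencies(coded_incidents, valence=None):
    """Count theme occurrences, optionally filtered by incident valence."""
    counts = Counter()
    for incident in coded_incidents:
        if valence is None or incident["valence"] == valence:
            counts.update(incident["themes"])
    return counts


print(theme_frequencies(incidents, valence="negative"))
# Counter({'spontaneous situation': 2, 'subcontractor involvement': 1, 'site observation': 1})
```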

Given the extent of its data gathering, the study did not aim to undertake a large-scale comparison of case studies, but rather to conceptualize detailed patterns of behaviour by a specific customer that could then be tested on broader-based customer groups. Based on Forsythe (2012), a customer for whom service quality was fully present in pre-purchase expectations was selected and examined in the case study. The overall aim of the case-study approach was to capture the dynamics and changing ‘heartbeat’ of the project as seen through the eyes of such a customer.

While the face-to-face interview data were intended to be linked to scores from the BUILDSERV instrument, the way the customer recounted events was free flowing and consequently needed a coding structure to determine patterns of behaviour. It was apparent from the interviews that the customer perceived service quality through an ever-evolving set of incidents that triggered positive and negative perceptions of service quality. Each incident was therefore inductively coded according to five recurring themes found to occur in every incident. These themes were prompted by Holsti’s (1969) account of the components required to define a theme: a perceiver, an agent of action, an action and a target of the action. An interpreted version of these features was developed to code both positive and negative incidents as follows:

Results

The data and related analysis are provided under headings that reflect the specified data gathering points. At each stage, a graph (see for example Figure 1) is used to present gap scores for customer satisfaction and service quality. Arrowheads show the current gap score and arrow tails show any movement in score since the last data gathering point. The bracketed section on the graph shows the period of incidents influencing gap movement.

Figure 1: Gap movements in customer satisfaction and service quality at floor stage

A second graph (see for example Figure 2) has similar presentation but breaks down the service quality gap into its seven individual dimensions as determined by the BUILDSERV instrument. Each is compared with the customer satisfaction gap (shown as a constant using a horizontal line) to explore which dimensions were most actively associated with customer satisfaction at a given point in time.

Figure 2: Gap movements in service quality dimensions and customer satisfaction at floor stage

A third graph (see for example Figure 3) presents the interview data according to the previously discussed coding system using a bar graph approach. The number of positive and negative incidents occurring during a given period is shown after the title. The frequency of individual features from these incidents is then expressed using bars to show the most prominent features characterizing both positive and negative incidents at each stage of the project. The presentation of the bars enables cumulative trends across the entire project to be analysed.
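A minimal plotting sketch of the first graph type described above is shown below. The gap values are hypothetical, and matplotlib is assumed as the charting library since the paper does not specify one.

```python
import matplotlib.pyplot as plt

# Hypothetical (previous, current) gap scores for one stage of construction.
gap_movements = {
    "Customer satisfaction": (1.0, 0.0),
    "Service quality": (0.4, 0.2),
}

fig, ax = plt.subplots()
for x, (label, (previous, current)) in enumerate(gap_movements.items()):
    # Arrow tail = previous gap score, arrowhead = current gap score.
    ax.annotate("", xy=(x, current), xytext=(x, previous),
                arrowprops=dict(arrowstyle="->", lw=2))
    ax.text(x + 0.05, current, f"{current:+.1f}")
ax.axhline(0, color="grey", linewidth=0.8)
ax.set_xticks(range(len(gap_movements)))
ax.set_xticklabels(list(gap_movements.keys()))
ax.set_xlim(-0.5, len(gap_movements) - 0.5)
ax.set_ylim(-2.5, 2.5)
ax.set_ylabel("Gap score")
ax.set_title("Gap movements at one stage of construction (illustrative)")
plt.show()
```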

Figure 3: Features of service incidents at floor stage (positives n = 5; negatives n = 2)

Floor Stage

In Figure 1, it is apparent that at floor stage, customer satisfaction had a small positive gap (1.0 point) and service quality followed a similar trend, albeit with a smaller gap (0.4 points). This suggests that the two were closely related at this point in time.

In breaking down the individual service quality dimensions, it can be seen from Figure 2 that some dimensions tracked customer satisfaction more actively than others. For instance, empathy, personal assurance, reliability and responsiveness were active in terms of moving in the same direction as customer satisfaction. In addition, reliability and responsiveness can be described as being more highly active in not only moving in the same direction but also tracking the customer satisfaction gap more closely than the other active dimensions.

Inactive dimensions are also apparent. For instance, care in execution, tangibles and work output remained neutral or moved in the opposite direction to customer satisfaction.

Based on the above, service quality can be recalculated by averaging gap scores of the four active dimensions (including acknowledgement of the highly active dimensions in this mix). This provides a revised score of 0.8, which is closer in association with the customer satisfaction gap (1.0) relative to the all-inclusive service quality gap (0.4).
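In the notation introduced earlier (a sketch of the arithmetic described above rather than a formula given in the paper), the recalculation simply averages the gaps of the set of active dimensions \(A\):

\[
SQ_{\text{active}} = \frac{1}{|A|}\sum_{d \in A} G_d
\]

so that, at floor stage, averaging the four active dimension gaps yields the reported revised score of 0.8.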

During the period, seven incidents were instrumental in shaping perceptions: five were positive and two negative. The ratio between the two reflects a positive bias and potentially helps to explain the small but positive gap scores discussed previously. Detailed features of these incidents are shown in Figure 3, which covers the five aspects of the incident coding structure:

Superstructure Stage

Figure 4 shows that at superstructure stage, customer satisfaction dropped relative to floor stage to being neutral (0 points). Service quality moved a lesser amount but in the same direction (0.2). The two gaps were in very close proximity, suggesting a continued close association between customer satisfaction and service quality.

Figure 4: Gap movements in customer satisfaction and service quality at superstructure stage

Figure 5 indicates that four of the seven dimensions were active insofar as they moved in the same direction as the customer satisfaction gap – reliability, responsiveness, empathy and care in execution of work. Of these, reliability appears to be the most highly active in terms of tracking the customer satisfaction gap most closely. In contrast, the remaining three dimensions – personal assurance, tangibles and work output – were inactive as they moved in the opposite direction to customer satisfaction.

Figure 5: Gap movements in service quality dimensions and customer satisfaction at superstructure stage

As previously, the average of the four active dimensions was recalculated to a revised gap of 0.13 (including acknowledgement of the highly active dimensions in this mix), which was again closer to the customer satisfaction gap (0.0) than the all-inclusive service quality gap (0.2).

This stage was found to break down into 15 identifiable incidents – 8 positive and 7 negative. This mix represents a weaker positive bias relative to floor stage, but again the ratio between positive and negative incidents appears reasonably well aligned with the relatively neutral gap scores above. This shows a similar trend to the previous stage whereby the positive and negative incidents appear to counter-balance each other, reflecting the service quality and customer satisfaction gap scores.

In depicting positive and negative incidents, there were again standout features (Figure 6):

Figure 6: Features of service incidents at superstructure stage (positives n = 8; negatives n = 7)

Linings Stage

Figure 7 indicates that satisfaction had dropped significantly since superstructure stage (-2.0) and service quality followed a similar trend (-0.8). The two gaps again tracked quite closely to each other, following trends from earlier stages.

Figure 7: Gap movements in customer satisfaction and service quality at linings stage

A breakdown of the seven individual dimensions of service quality (Figure 8) shows that all dimensions except tangibles were active in following the same negative direction as the customer satisfaction gap. Of these, reliability, empathy and care in execution of work appeared to be more highly active in terms of tracking the customer satisfaction gap than the other active dimensions.

Figure 8: Gap movements in service quality dimensions and customer satisfaction at linings stage

Recalculation according to these active dimensions (including acknowledgement of the highly active dimensions in this mix) resulted in a gap of –1.0, which was closer to the customer satisfaction gap (-2.0) than the all-inclusive service quality gap (-0.8).

In conjunction with gap scores, there was a marked increase in incidents during the period with 27 incidents in total: 4 were positive and 23 negative. Associated features (Figure 9) include:

Figure 9: Features of service incidents at linings stage (positives n = 4; negatives n = 23)

The strong negative bias in incidents was not fully reflected in the relatively mid-level negative gap scores. This may be a result of certain features within the overall set of incidents occurring during this stage. For instance, in Figure 9, it appears that many spontaneous situations led to secondary situations (i.e. 13 spontaneous and 12 secondary situations), thus creating a complex rolling set of problems and an ongoing state of malaise for the customer rather than resolution of the original situation. Another possible reason is simply that the customer may have reached a saturation point regarding negative incidents, meaning that after this point negative incidents had a decreasing impact on customer satisfaction and service quality gap scores.

Practical Completion Stage

As shown in Figure 10, the customer’s satisfaction improved significantly in the final stage of the project, moving from a negative to a neutral gap score (0.0). Service quality again followed a similar trend, although the movement was less pronounced, finishing at -0.3.

Figure 10: Gap movements in customer satisfaction and service quality at practical completion stage

From Figure 11, most dimensions moved in the same positive direction as the customer satisfaction gap. The active dimensions tracking in the same direction included empathy, reliability, responsiveness, personal assurance, work output and care in execution of work. Of these, reliability and care in execution appear more highly active in terms of tracking customer satisfaction than the others. The average of the six active dimensions equates to a gap of -0.06 (including acknowledgement of the highly active dimensions in this mix) which again was closer to the customer satisfaction gap (0.0) than the all-inclusive service quality gap (-0.3). This now strongly supports the trend that some dimensions are more active in influencing customer satisfaction than others, thus giving a more refined view of the association between customer satisfaction and service quality.

Figure 11: Gap movements in service quality dimensions and customer satisfaction at practical completion

There were 7 positive and 13 negative incidents during the period. The ratio between the two shows a fairly strong negative bias which, as at the previous data gathering point, is not especially well aligned with the relatively neutral gap scores for this period. One reason could be the diluted influence of large numbers of negative incidents (as discussed at linings stage), whereby there is a reduced effect once a saturation point is reached. Another possible cause, found in the interview data but not apparent at earlier data gathering points, is a change in the customer’s focus from a process-based perspective of service quality to one focused more on delivery of the end product (i.e. the completed house).

For example, the customer expressed pleasure in seeing the kitchen and bathroom tiling as finished work. Such aspects changed the complexion of the site from an industrial feel to something resembling a home. Thus, the customer seemed to appreciate what the end product would look like from an end user’s perspective. This seemed to lift the customer’s opinion of care in execution of work. In addition, the fact that the service delivery had come to an end seemed to provide relief and the prospect of life returning to normal for the customer. For convenience, this is referred to as the end of process/product realization factor. It seems to have had a positive effect on gap scores, thus counter-balancing the otherwise strong negative bias in the incident ratio.

Figure 12 presents a coded breakdown of the frequency of features present in the previously discussed incidents, showing that:

Figure 12: Features of service incidents at practical completion (positives n = 7; negatives n = 13)

Integrated Analysis of Data Covering All Stages of Construction

It is clear from the staged analysis that service quality gap scores influenced customer satisfaction most strongly when focusing upon active service quality dimensions. Highly active dimensions particularly stood out as being strongly associated with customer satisfaction scores. Trends across all stages of construction can now be drawn together, including:

The customer experienced a total of 69 incidents (45 negative and 24 positive), which influenced both service quality and customer satisfaction gap scores. Key overarching themes across the entire project (from Figure 12) include:

Conclusions

This study offers a highly detailed account of the incidents that occur during construction and the formation of customer-perceived service quality, including how such perceptions impact on customer satisfaction evaluations. The methodology provides a means for reliably studying conceptual patterns of behaviour and for housing contractors to diagnose service incident (and service quality) requirements in a structured way.

As the research is based on a single in-depth case study, the findings provide a basis for theory development rather than being immediately generalizable across broad populations of customers. Further, they are limited to housing construction customers with low experience but high involvement in the process.

A consistent theme across all four stages of analysis was a link between service incidents, service quality and customer satisfaction. For instance, the ratio between positive and negative incidents appears to influence the direction of both service quality and customer satisfaction gap scores, and to some extent the size of the gap as well. Even so, a saturation point regarding negative incidents appears to be reached at times, beyond which a strong negative bias in the ratio does not necessarily have the same degree of negative impact on gap scores. The end of process/product realization factor also appears to have its own positive impact on customer satisfaction scores toward the end of construction (i.e. when the end product can be appreciated by the customer and there is emotional relief from the construction process). Service quality gap scores influenced customer satisfaction most strongly when focusing upon active service quality dimensions. Highly active dimensions particularly stood out as being strongly associated with customer satisfaction scores. Reliability was highly active over the entire construction process and care in execution of work was highly active over the latter half of the project, which may be a result of the targeted customer being technically limited in their ability to evaluate the construction until it is closer to being a completed end product. Both dimensions should be focused upon in ongoing studies to determine if they deserve higher weighting than other dimensions in service quality measurement, and to help contractors know where best to focus their service quality efforts. In contrast, tangibles were continually inactive, and this dimension should be checked further to test whether or not it deserves ongoing inclusion in service quality measurement during construction.

Perceptions from service incidents were predominantly triggered via spontaneous situations and, at certain stages of the project, often led to secondary situations. From a service delivery perspective, there seems to be a need to more purposefully orchestrate the situational contexts that customers encounter in order to help create positive perceptions. This is perhaps most achievable through planned rather than impromptu site visits. Further, secondary situations appear important when they follow on from already negative situations. As they can either resolve or compound such situations, service providers need to more fully consider the best course of action, especially to prevent any ongoing domino effect (i.e. causing rolling negative incidents).

It was apparent that the large amount of mainly negative subcontractor involvement in the case study could have been avoided. It would seem that the best course of action for improved service quality would be for the builder to limit or control direct subcontractor involvement with the customer.

Incidents often concerned site observations or personal interaction, which appear related in that site observations potentially led to personal interaction (and vice versa). Both features could be more controlled with more purposeful and orchestrated site visits. Moreover, this should predominantly be with the builder (and specifically with staff who have relevant construction expertise and decision-making power) to purposefully encourage positive perceptions by the customer and address their inherent lack of technical expertise.

Core content mainly concerned progressive product quality or progressive construction activity. In addressing this issue, builders should consider the strategic benefits of fast and seamless construction onsite (in conjunction with the above-mentioned orchestrated site visits). Such an approach obviously aims to address the customer’s perceptions of progressive construction activity in a way that excites with speed, while providing a relatively limited ability for the customer to observe the process in detail. This has the potential to control the customer’s site observations and hence control stimuli relating to things like progressive product quality. Finally, it may allow the end of process/product realization factor to be leveraged to a greater extent (in a positive way) and limit the extent of personal interaction required on projects.

With regard to customer involvement, builders are often reluctant to entertain customers expressing new needs, but even so, this appears to be clearly associated with positive incidents. Therefore, it may be encouraged judiciously by builders when it suits the nature of the project or in order to improve or re-align poor customer relations. Conversely, the builder’s staff should be trained to avoid situations where customers are forced into defensive action, as this is predominantly associated with negative incidents.

Ongoing research should aim to test the generalizability of the methodology presented in this research. There is also the need to more fully determine if themes found in this case study have relevance to a larger cross section of high-involvement but low-experience housing construction customers. Following this, there is the prospect of adapting the associated concepts and methods of incident analysis to profile other types of construction customers as well.

References

Australian Bureau of Statistics, 2011. Census of Population and Housing No. 2001.0. Canberra: Australian Bureau of Statistics.

Babakus, E. and Boller, G.W., 1992. An empirical assessment of the SERVQUAL scale. Journal of Business Research, 24(3), pp.253-68. doi: http://dx.doi.org/10.1016/0148-2963(92)90022-4

Banks, G., Robertson, D. and Shann, E., 2004. First home ownership - Productivity Commission Inquiry Report 28, Canberra: Australian Government Productivity Commission. Available at: <http://www.pc.gov.au/_data/assets/pdf_file/0016/56302/housing.pdf>.

Bednall, S. and Kanuk, W., 1997. Consumer behavior. Sydney: Prentice Hall.

Bei, L.T. and Chiao, Y.C., 2001. An integrated model for the effects of perceived product, perceived service quality, and perceived price fairness on consumer satisfaction and loyalty. Journal of Consumer Satisfaction Dissatisfaction and Complaining Behaviour, 14, pp.125-40.

Bettman, J.R., 1979. Information processing theory of consumer choice. Reading: Addison Wesley.

Bolton, R.N. and Drew, J.H., 1991. A longitudinal analysis of the impact of service changes on customer attitudes. The Journal of Marketing, pp.1-9. doi: http://dx.doi.org/10.2307/1252199

Boyatzis, R.E., 1998. Transforming qualitative information: thematic analysis and code development. Thousand Oaks, CA: Sage Publications.

Brown, T.J., Churchill Jr, G.A. and Peter, J.P., 1993. Research Note: Improving the Measurement of Service Quality. Journal of Retailing, 69(1), p.127. doi: http://dx.doi.org/10.1016/S0022-4359(05)80006-5

Bubshait, A.A., Farooq, G., Osama Jannadi, M. and Assaf, S.A., 1999. Quality practices in design organizations. Construction Management and Economics, 17(6) pp.799-809. doi: http://dx.doi.org/10.1080/014461999371132

Cardozo, R., 1965. An experimental study of consumer effort, expectation and satisfaction. Journal of Marketing Research, 2(August), pp.244-9. doi: http://dx.doi.org/10.2307/3150182

Carman, J. M., 1990. Consumer perceptions of service quality: An assessment of the SERVQUAL dimensions. Journal of retailing.

Cravens, D.W., Holland, C.W., Lamb Jr, C.W. and Moncrief III, W.C., 1988. Marketing’s role in product and service quality. Industrial Marketing Management, 17(4), pp.285-304. doi: http://dx.doi.org/10.1016/0019-8501(88)90032-6

Cronin Jr, J.J., Brady, M.K. and Hult, G.T.M., 2000. Assessing the effects of quality, value, and customer satisfaction on consumer behavioural intentions in service environments. Journal of retailing, 76(2), pp.193-218. doi: http://dx.doi.org/10.1016/S0022-4359(00)00028-2

Cronin Jr, J.J. and Taylor, S.A., 1992. Measuring service quality: a re-examination and extension. The journal of marketing, pp.55-68. doi: http://dx.doi.org/10.2307/1252296

Cronin Jr, J.J. and Taylor, S.A., 1994. SERVPERF Versus SERVQUAL: Reconciling Performance-Based and Perceptions-Minus-Expectations Measurement of Service Quality. Journal of Marketing, 58(1) pp.125-31. doi: http://dx.doi.org/10.2307/1252256

Crosby, P.B., 1984. Quality without tears. USA: McGraw Hill.

Day, E. and Barksdale Jr, H.C., 1992. How firms select professional services. Industrial Marketing Management, 21(2) pp.85-91. doi: http://dx.doi.org/10.1016/0019-8501(92)90002-B

Deming, W.E., 1982. Out of the Crisis. 18th ed. Cambridge: Cambridge University Press.

Engel, J., Kollat, D. and Blackwell, R., 1968. Consumer Behavior. New York: Holt, Rinehart and Winston.

Engel, J.F., Blackwell, R.D. and Miniard, P.W., 1993. Consumer Behavior. 7th ed. Fort Worth: Dryden Press.

Feigenbaum, A.V., 1983. Total Quality Control. 3rd ed. New York: McGraw Hill.

Flyvbjerg, B., 2006. Five misunderstandings about case-study research. Qualitative inquiry, 12(2) pp.219-45. doi: http://dx.doi.org/10.1177/1077800405284363

Forsythe, P., 2007a. A conceptual framework for studying customer satisfaction in residential construction. Construction management and economics, 25(February), pp.171-82. doi: http://dx.doi.org/10.1080/01446190600771439

Forsythe, P., 2007b. An instrument for measuring customer perceived service quality in housing construction. Paper presented to the CIB W092 2007 Interdisciplinarity in Built Environment Procurement. Hunter Valley.

Forsythe, P.J., 2012. Profiling customer perceived service quality expectations in made-to-order housing construction in Australia. Engineering, Construction and Architectural Management, 19(6), pp.587-609. doi: http://dx.doi.org/10.1108/09699981211277522

Foxall, G.R., 1990. Consumer Psychology in Behavioural Perspective. London: Routledge.

Garvin, D., 1987. Competing on the eight dimensions of quality. Harvard Business Review, 65(6), pp.101-9.

Garvin, D.A., 1983. Quality on the line. Harvard Business Review, 61(5) pp.64-75.

Holm, M.G., 2000a. Service management in housing refurbishment: a theoretical approach. Construction Management and Economics, 18(5), pp.525-33. doi: http://dx.doi.org/10.1080/014461900407338

Holm, M.G., 2000b. Service quality and product quality in housing refurbishment. International Journal of Quality Management, 17(4-5), pp.527-40.

Holsti, O., 1969. Content Analysis for the Social Sciences and Humanities. Reading: Addison Wesley.

Howard, J.A. and Sheth, J.N., 1969. The theory of buyer behaviour. New York: Wiley.

Hoxley, M., 1994. Assessment of building surveying service quality: process or outcome? RICS research series paper, 1(8).

Hoxley, M., 2000. Measuring UK construction professional service quality: the what, how, when and who. International Journal of Quality and Reliability Management, 17(4/5), pp.511-26. doi: http://dx.doi.org/10.1108/02656710010298553

International Standards Organisation, 2008. ISO 9001:2008 Quality management systems – Requirements. Switzerland: ISO.

Jäger, C., 2008. The Principal-Agent-Theory Within the Context of Economic Sciences. Books on Demand.

Jain, S.K. and Gupta, G., 2004. Measuring service quality: SERVQUAL vs. SERVPERF scales. Vikalpa, 29(2), pp.25-37.

Jensen, M.C. and Meckling, W.H., 1979. Theory of the firm: Managerial behaviour, agency costs, and ownership structure. In: K. Brunner, ed. Netherlands: Springer. pp.163-231.

Joint Select Committee on the Quality of Buildings, 2002. Report on the Quality of Buildings Parliamentary Paper No. 156. Sydney: Parliament of NSW.

Juran, J.M., 1988. Juran on Planning for Quality. New York: The Free Press.

Kotler, P. and Armstrong, G., 2013. Principles of Marketing 15th Global Edition. Pearson.

Lai, A.W. and Pang, P.S., 2010. Measuring performance for building maintenance providers. Journal of construction engineering and management, 136(8), pp.864-76. doi: http://dx.doi.org/10.1061/(ASCE)CO.1943-7862.0000191

Ling, F.Y.Y. and Chong, C.L.K., 2005. Design-and-build contractors’ service quality in public projects in Singapore. Building and Environment, 40(6), pp.815-23. doi: http://dx.doi.org/10.1016/j.buildenv.2004.07.017

Love, P.E.D., Smith, J., Treloar, G.J. and Li, H., 2000. Some empirical observations of service quality in construction. Engineering, Construction and Architectural Management, 7(2), p.191. doi: http://dx.doi.org/10.1108/eb021144, http://dx.doi.org/10.1046/j.1365-232X.2000.00147.x

Magnini, V.P., Kara, D., Crotts, J.C. and Zehrer, A., 2012. Culture and service-related positive disconfirmations. An application of travel blog analysis. Journal of Vacation Marketing, 18(3), pp.251-7. doi: http://dx.doi.org/10.1177/1356766712449371

Nelson, S.L. and Nelson, T.R., 1995. RESERV: an instrument for measuring real estate brokerage service quality. Journal of Real Estate Research, 10(1), pp.99-113.

Nicosia, F.M., 1966. Consumer decision processes: marketing and advertising implications. Englewood Cliffs, N.J: Prentice-Hall.

Oliver, R.L., 1977. Effect of expectation and disconfirmation on postexposure product evaluations: An alternative interpretation. Journal of applied psychology, 62(4), p.480. doi: http://dx.doi.org/10.1037/0021-9010.62.4.480

Oliver, R.L., 1980a. A cognitive model of the antecedents and consequences of satisfaction decisions. Journal of marketing research, pp.460-9. doi: http://dx.doi.org/10.2307/3150499

Oliver, R.L., 1980b. Theoretical cases of consumer satisfaction research: review, critique and future research. In: C.W. Lamb Jr and P.M. Dunne, eds. Theoretical Developments in Marketing. Chicago: American Marketing Association.

Oliver, R.L., 1981. Measurement and evaluation of satisfaction processes in retail settings. Journal of retailing.

Oliver, R.L., 1993. A conceptual model of service quality and customer satisfaction. In: T. Swartz, D.E. Bowen and S.W. Brown, eds. Advances in Services Marketing and Management: Research and Practice, Greenwich: JAI Press 2.

Oliver, R.L., 2006. Co-producers and co-participants in the satisfaction process. In: R.F. Lusch and S.L. Vargo, eds. The service-dominant logic of marketing: dialog, debate, and directions. New York: ME Sharpe Inc. p.118.

Oliver, R.L., 2010. Satisfaction: A behavioral perspective on the consumer. 2nd ed. London: ME Sharpe.

Parasuraman, A., Berry, L.L. and Zeithaml, V.A., 1991. Refinement and Reassessment of the SERVQUAL Scale. Journal of Retailing, 67(4), p.420.

Parasuraman, A., Berry, L.L. and Zeithaml, V.A., 1993. Research Note: More on Improving Service Quality Measurement. Journal of Retailing, 69(1), p.140. doi: http://dx.doi.org/10.1016/S0022-4359(05)80007-7

Parasuraman, A., Zeithaml, V.A. and Berry, L.L., 1985. A conceptual model of service quality and its implications for future research. The Journal of Marketing, pp.41-50. doi: http://dx.doi.org/10.2307/1251430

Parasuraman, A., Zeithaml, V.A. and Berry, L.L., 1988. SERVQUAL: A Multiple-Item Scale for Measuring Consumer Perceptions of Service Quality. Journal of Retailing, 64(1), pp.12-40.

Parasuraman, A., Zeithaml, V.A. and Berry, L.L., 1994a. Reassessment of expectations as a comparison Standard in Measuring Service quality. Journal Of Marketing, 58(January), pp.111-24. doi: http://dx.doi.org/10.2307/1252255

Parasuraman, A., Zeithaml, V.A. and Berry, L.L., 1994b. Alternative scales for measuring service quality: a comparative assessment based on psychometric and diagnostic criteria. Journal of retailing, 70(3), pp.201-30. doi: http://dx.doi.org/10.1016/0022-4359(94)90033-7, http://dx.doi.org/10.1016/0022-4359(94)90032-9

Patterson, P. and Johnson, L., 1993. Disconfirmation of expectations and the gap model of service quality: an integrated paradigm. Journal of Consumer Satisfaction, Dissatisfaction and Complaining Behavior, 6, pp.90-9.

Procter, C. and Rwelamila, P., 1999. Service quality in the quantity surveying profession in South Africa. In: Conference Proceedings, Customer Satisfaction: A Focus for Research and Practice in Construction. Cape Town, 5-10 September. Cape Town: E.F. and N. Spons.

Ragin, C.C., 1992. “Casing” and the process of social inquiry. In: What is a case? Exploring the foundations of social inquiry. Cambridge: Cambridge University Press. pp.217-26.

Samson, D. and Parker, R., 1994. Service quality: the gap in the Australian consulting engineering industry. International Journal of Quality and Reliability Management, 11(7), pp.60-76. doi: http://dx.doi.org/10.1108/02656719410738993

Seth, N., Deshmukh, S. and Vrat, P., 2005. Service quality models: a review. International Journal of Quality and Reliability Management, 22(9), pp.913-49. doi: http://dx.doi.org/10.1108/02656710510625211

Shewhart, W.A., 1931. Economic control of quality of manufactured product. ASQ Quality Press.

Siu, G.K.W., Bridge, A. and Skitmore, M., 2001. Assessing the service quality of building maintenance providers: mechanical and engineering services. Construction Management and Economics, 19(7), pp.719-26. doi: http://dx.doi.org/10.1080/01446190110062104

Spreng, R.A. and Mackoy, R.D., 1996. An empirical examination of a model of perceived service quality and satisfaction. Journal of retailing, 72(2), pp.201-14. doi: http://dx.doi.org/10.1016/S0022-4359(96)90014-7

Sunindijo, R.Y., Hadikusumo, B.H. and Phangchunun, T., 2014. Modelling service quality in the construction industry. International Journal of Business Performance Management, 15(3), pp.262-76. doi: http://dx.doi.org/10.1504/IJBPM.2014.063026

Taguchi, G., 1986. Introduction to Quality Engineering. Tokyo: Asian Productivity Organisation.

Tranfield, D., Rowe, A., Smart, P., Levene, R., Deasley, P. and Corley, J., 2005. Coordinating for service delivery in public-private partnership and private finance initiative construction projects: early findings from an exploratory study. Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, 219(1), pp.165-75. doi: http://dx.doi.org/10.1243/095440505X8037

Vandermerwe, S., 1994. Quality in services: the ‘softer’ side is ‘harder’ (and smarter). Long Range Planning, 27(2), pp.45-56. doi: http://dx.doi.org/10.1016/0024-6301(94)90208-9

Vargo, S.L. and Lusch, R.F., 2004. Evolving to a new dominant logic for marketing. Journal of marketing, 68(1), pp.1-17. doi: http://dx.doi.org/10.1509/jmkg.68.1.1.24036

Victor, B. and Boynton, A.C., 1998. Invented here: Maximising your organisation’s internal growth and profitability. A practical guide to transforming work. Boston: Harvard Business School Press.

Walter, L., 2005. Six Sigma: is it Really Different? Quality and Reliability Engineering International, 21(2), pp.221-4. doi: http://dx.doi.org/10.1002/qre.633

Wilson, A., Zeithaml, V.A., Bitner, M.J. and Gremler, D.D., 2012. Services marketing: Integrating customer focus across the firm. McGraw-Hill.

Winch, G., Usmani, A. and Edkins, A., 1998. Towards total project quality: a gap analysis approach. Construction Management and Economics, 16(2), pp.193-207. doi: http://dx.doi.org/10.1080/014461998372484

Womack, J.P., Jones, D.T. and Roos, D., 2008. The machine that changed the world. New York: Simon and Schuster.

Yi, Y., 1990. A critical review of consumer satisfaction. Review of marketing, 4(1), pp.68-123.

Yin, R.K., 2009. Case study research: Design and methods. 4th ed. USA: Sage Publications.

Yüksel, A. and Yüksel, F., 2001. The expectancy-disconfirmation paradigm: a critique. Journal of Hospitality and Tourism Research, 25(2), pp.107-31. doi: http://dx.doi.org/10.1177/109634800102500201

Zeithaml, V.A., 1988. Consumer perceptions of price, quality, and value: a means-end model and synthesis of evidence. The Journal of Marketing, pp.2-22. doi: http://dx.doi.org/10.2307/1251446

Appendix A

BUILDSERV and customer satisfaction survey scoring instrument

Customer Survey during Construction

Name………………………………………………………….. Date…………………………

Attached is a survey asking you to report your experiences on the current stage of your construction project.

The survey is in two parts. The first asks about the quality of service provided by your builder and focuses on the actual building process. The second part focuses on the quality of the physical construction (i.e. the end product) provided by the service.

In answering the survey, please skim through the questions before answering them. This will help you give a carefully considered response. Respond to each item from your own perspective; we are interested in how you are finding the building experience and your views of the outcomes.

Before commencing the survey, could you please indicate, by circling a number on the scale below, how satisfied you have been with the overall performance of your builder at this point in time.

[Overall satisfaction rating scale]

Customer’s Perception of Service Quality


Name………………………………………………………….. Date…………………………

Please think about the quality of service provided on your project, compared to your ideal level of service.

Your ‘ideal’ level is the level of performance you believe a building company can and should provide.

Please rate the service provided by circling a number between 1 and 9 for each of the following statements, unless you have no opinion.

The Builder’s service performance is rated for each of the following items on a scale from 1 to 9, where 1 = lower than my ideal level, the middle of the scale = the same as my ideal level, and 9 = higher than my ideal level; circle N for no opinion.

When it comes to:
1. Providing service as promised 1 2 3 4 5 6 7 8 9 N
2. Dependability in handling customer’s service problems 1 2 3 4 5 6 7 8 9 N
3. Performing services right the first time 1 2 3 4 5 6 7 8 9 N
4. Providing services at the promised time 1 2 3 4 5 6 7 8 9 N
5. Maintaining error free records (e.g. financial) 1 2 3 4 5 6 7 8 9 N
6. Keeping customers informed of when services will be performed 1 2 3 4 5 6 7 8 9 N
7. Prompt service to customers 1 2 3 4 5 6 7 8 9 N
8. Willingness to help customers 1 2 3 4 5 6 7 8 9 N
9. Readiness to respond to customer’s requests 1 2 3 4 5 6 7 8 9 N
10. Confidence in trustworthiness and honesty 1 2 3 4 5 6 7 8 9 N
11. Make customers feel safe in their transactions 1 2 3 4 5 6 7 8 9 N
12. Being consistently courteous 1 2 3 4 5 6 7 8 9 N
13. Having the knowledge to answer customer questions 1 2 3 4 5 6 7 8 9 N
14. Giving customers individual attention 1 2 3 4 5 6 7 8 9 N
15. Dealing with customers in a caring fashion 1 2 3 4 5 6 7 8 9 N
16. Having the customer’s best interests at heart 1 2 3 4 5 6 7 8 9 N
17. Understanding the needs of their customers 1 2 3 4 5 6 7 8 9 N
18. Convenient business hours 1 2 3 4 5 6 7 8 9 N
19. Modern equipment 1 2 3 4 5 6 7 8 9 N
20. Visually appealing facilities 1 2 3 4 5 6 7 8 9 N
21. Having a neat, professional appearance 1 2 3 4 5 6 7 8 9 N
22. Initiative (e.g. in solving problems and customising the work) 1 2 3 4 5 6 7 8 9 N
23. Pride in work 1 2 3 4 5 6 7 8 9 N
24. Cooperation and flexibility 1 2 3 4 5 6 7 8 9 N
25. Open communicator 1 2 3 4 5 6 7 8 9 N
26. Applying knowledge and experience 1 2 3 4 5 6 7 8 9 N