Vol. 19, n. 1, febbraio 2026

STRUMENTI

La Short Supervisor Feedback Environment Scale: evidenze delle proprietà psicometriche della versione italiana

Elena Lo Piccolo, Marco Giovanni Mariani e Gerardo Petruzziello

Sommario

Il supervisor feedback environment rappresenta un fattore contestuale chiave che influenza il modo in cui i lavoratori interpretano e utilizzano il feedback nelle interazioni quotidiane di lavoro. Sebbene la versione breve della Supervisor Feedback Environment Scale sia stata ampiamente utilizzata nella ricerca organizzativa, nel contesto italiano sono finora mancate evidenze sistematiche sulla sua validità e attendibilità.

Il presente studio si propone di fornire prime evidenze sulle proprietà psicometriche della versione italiana della Short Supervisor Feedback Environment Scale. La scala è stata somministrata a un campione di 368 dipendenti della pubblica amministrazione italiana. Le analisi fattoriali confermative hanno indicato che una struttura di secondo ordine fornisce una rappresentazione adeguata dei dati, mostrando un adattamento migliore rispetto a una soluzione unidimensionale. La scala ha mostrato una consistenza interna soddisfacente ed evidenze di validità di costrutto attraverso associazioni teoricamente coerenti con il work engagement, la soddisfazione lavorativa, il benessere soggettivo (WHO-5), la feedback orientation e il carico di lavoro percepito.

Nel complesso, i risultati supportano l’affidabilità e la validità della versione italiana breve della Supervisor Feedback Environment Scale e ne suggeriscono l’utilità come strumento conciso per la ricerca e per finalità di valutazione nei contesti organizzativi.

Parole chiave

Ambiente di feedback, Scala breve, Analisi fattoriale confermativa, Affidabilità e validità, Adattamento italiano.

Assessment tools

The Italian Version of the Short Supervisor Feedback Environment Scale: Evidence of Psychometric Properties

Elena Lo Piccolo, Marco Giovanni Mariani and Gerardo Petruzziello

Abstract

The supervisor feedback environment is a key contextual factor shaping how employees interpret and use feedback in everyday work interactions. Although the short Supervisor Feedback Environment Scale has been widely used in organizational research, evidence on the reliability and validity of the Italian version is still limited.

This study provides preliminary evidence on the psychometric properties of the Italian short Supervisor Feedback Environment Scale in a sample of 368 employees working in the Italian public administration. Confirmatory factor analyses supported a second-order structure, showing a better fit than a unidimensional solution. The scale showed satisfactory internal consistency, and construct validity was supported by theoretically coherent associations with work engagement, job satisfaction, employee well-being, feedback orientation, and perceived workload.

Overall, findings support the reliability and validity of the Italian short Supervisor Feedback Environment Scale and suggest that it may represent a brief tool for research and assessment purposes in organizational settings.

Keywords

Feedback environment, Supervisor feedback, Scale validation, Confirmatory factor analysis, Italian version.

Introduction

In contemporary work settings characterized by ongoing change and increasing demands for continuous learning, employability, and sustainable career development, feedback represents a key resource for both employees and organizations. Far from being limited to an evaluative function, feedback has long been conceptualized as a central driver of organizational learning, employee development, and performance improvement (Ilgen et al., 1979; Kluger & DeNisi, 1996). Within the organizational sciences, feedback occupies a foundational position, supported by a long-standing theoretical and empirical tradition documenting its relevance across domains such as motivation, learning and performance management (Anseel & Sherf, 2024).

Research has moved beyond viewing feedback as a discrete event tied to formal evaluations, instead conceptualizing it as an ongoing, socially embedded process situated in everyday work interactions and continuous performance management (London & Smither, 2002; Schleicher et al., 2018). From this perspective, feedback experiences are shaped over time by recurring interactions and their relational context, forming a dynamic and cyclical process in which prior experiences influence subsequent feedback and learning (Anseel & Sherf, 2024). Building on this process-oriented view, growing attention has been devoted to the feedback environment as a key construct capturing the contextual and relational conditions under which day-to-day feedback occurs in organizations. The feedback environment refers to employees’ perceptions of the contextual and relational characteristics that shape how feedback is typically exchanged, interpreted, and used in ongoing day-to-day interactions with supervisors and co-workers (Steelman et al., 2004). By conceptualizing feedback as a relational, context-dependent process, this framework helps explain why similar feedback may be differentially interpreted and used by employees in everyday work settings.

Theoretical and empirical research has consistently shown that the feedback environment is meaningfully associated with a wide range of employee attitudes and outcomes (Katz et al., 2021). This research has advanced the understanding that the feedback environment is not merely a contextual backdrop for feedback exchanges, but a meaningful and consequential psychological construct with implications for both employee functioning and organizational effectiveness. More recently, this line of research has been further extended by the theorization of employees’ reactions to the feedback environment (Elicker et al., 2019). Moving beyond feedback episodes, scholars have proposed that individuals develop relatively stable attitudinal evaluations of the day-to-day feedback context — such as satisfaction with, perceived fit with, and perceived fairness of the feedback environment — which help explain how feedback environments translate into important work-related outcomes (Elicker et al., 2019). Together, these developments point to the feedback environment as a mature, theoretically relevant construct that warrants continued empirical attention and rigorous measurement.

From an applied perspective, the supervisor feedback environment is also highly relevant for counseling and organizational support practices, as it provides a useful framework for assessing everyday supervisory interactions, identifying relational resources and criticalities, and informing interventions aimed at promoting employee well-being, learning, and sustainable functioning at work.

Theoretical Background

Although the concept of feedback environment was initially introduced in the organizational literature with a largely descriptive focus (Herold & Parsons, 1985), it was systematically conceptualized and operationalized as a multidimensional construct by Steelman et al. (2004). In their model, the feedback environment is defined in terms of feedback received from supervisors and co-workers and comprises seven core facets that characterize the quality of everyday, informal feedback interactions. In line with the original conceptualization of the Supervisor Feedback Environment Scale (Steelman et al., 2004), the supervisor feedback environment is operationalized through seven distinct facets assessed by a total of 32 items. These facets include source credibility (5 items), referring to the perceived expertise and trustworthiness of the feedback provider; feedback quality (5 items), capturing the usefulness, consistency, and informational value of the feedback provided; and feedback delivery (5 items), reflecting the degree of tact, consideration, and interpersonal sensitivity shown toward the feedback recipient.

In addition, the scale assesses favorable feedback (4 items), referring to the perceived frequency of positive feedback when performance warrants it; unfavorable feedback (4 items), capturing the perceived frequency of negative feedback that accurately reflects performance shortcomings; source availability (5 items), reflecting the perceived accessibility of the supervisor for providing performance-related information; and promotion of feedback seeking (4 items), referring to the extent to which the supervisor encourages, supports, and responds constructively to employees’ feedback-seeking behaviors.

Taken together, these dimensions reflect the situational and relational support that employees perceive in their habitual feedback interactions (Steelman et al., 2004).

Although the feedback environment framework distinguishes between supervisor and co-worker feedback sources, the present study focuses on the supervisor feedback environment, which has received the most attention in the empirical literature and has been most consistently linked to key relational, attitudinal, and performance-related outcomes. The validity of the supervisor feedback environment is supported by extensive empirical evidence linking favorable feedback environments to a wide range of positive organizational outcomes (Katz et al., 2021). A meta-analysis conducted by Katz, Rauvola, and Rudolph (2021) showed that the feedback environment is strongly associated with leader-member exchange (LMX; rc = .81), job satisfaction (rc = .51), and trust in the supervisor (rc = .74), and is also positively related to feedback orientation. Moreover, the feedback environment functions as a key job resource for employee well-being, exhibiting a robust negative relationship with burnout (rc = -.51) and demonstrating incremental validity in predicting well-being outcomes beyond related constructs such as LMX and feedback orientation (Katz et al., 2021). Finally, evidence from studies using short versions of the scale indicates that a favorable feedback environment is positively associated with employee performance (r = .23; Gallo et al., 2022).

While the notion of the feedback environment emerged earlier (Herold & Parsons, 1985), its systematic measurement was established by Steelman et al. (2004) through the development of the Feedback Environment Scale, which remains the dominant instrument used in empirical research on feedback environments. The psychometric validation of the Feedback Environment Scale was originally conducted by Steelman, Levy, and Snell (2004). In their validation study, the authors administered the 32-item scale to a sample of 405 employees from two organizations and tested the proposed structure separately for supervisor and co-worker feedback environments. In developing the items of the Feedback Environment Scale, Steelman et al. (2004) drew directly on the existing feedback and performance management literature in order to operationalize the day-to-day contextual characteristics of feedback processes. Rather than adapting items from a single pre-existing scale, the authors generated new items intended to reflect how feedback is typically enacted in everyday work interactions. Item content was informed by prior conceptual and empirical work on feedback sources, feedback quality, credibility, and delivery (e.g., Ilgen et al., 1979; Giffin, 1967), as well as by practitioner-oriented descriptions of common feedback problems in organizations. Confirmatory factor analyses supported the hypothesized seven-factor structure for both sources, with excellent fit overall, although model fit was consistently stronger for the supervisor feedback environment than for the co-worker feedback environment. Internal consistency estimates were high across both sources: Cronbach’s alpha coefficients for the supervisor subscales ranged from .82 to .92, with an overall reliability of .96.
Evidence of temporal stability was also provided through a test-retest design over a four- to five-month interval, yielding higher stability coefficients for the supervisor feedback environment (overall r = .85).

Beyond the original validation by Steelman et al. (2004), and to the best of our knowledge, the full version of the Feedback Environment Scale has been formally validated in only one additional study, namely the Japanese cross-cultural adaptation by Momotani and Otsuka (2018). Aside from this contribution, formal psychometric validation of the full scale in other cultural or organizational contexts appears largely absent. In line with the original study, Momotani and Otsuka (2018) did not conduct an exploratory factor analysis, but directly tested the a priori seven-factor structure of the Feedback Environment Scale using confirmatory factor analysis for both the supervisor and co-worker versions. The hypothesized model showed acceptable fit, with slightly stronger support for the supervisor feedback environment (CFI = .90, RMSEA = .07), thereby providing evidence for its factorial validity in the Japanese context. Reliability estimates were generally satisfactory across dimensions (Cronbach’s α ranging from .68 to .92; overall α = .96). Construct validity was supported by positive associations with feedback seeking, leader-member exchange, job satisfaction, and work engagement, as well as negative associations with indicators of psychological distress. Taken together, existing psychometric evidence on the Feedback Environment Scale — although supportive of its factorial structure and reliability — has relied almost exclusively on the original, full-length instrument. Beyond highlighting the robustness of the underlying model, this also raises practical considerations regarding its feasibility for applied and large-scale research.

Within the broader Feedback Environment Scale framework, which distinguishes between feedback received from different sources, the supervisor feedback environment is operationalized through a 32-item measure encompassing seven distinct facets (Steelman et al., 2004). While this level of detail provides a comprehensive representation of the construct, the instrument’s length may limit its applicability in field studies and multi-construct survey designs, where respondent burden and survey length are critical constraints. In response to these practical considerations, abbreviated measures of the feedback environment have been increasingly adopted in the literature. Accordingly, Rosen (2006) developed a shortened version of the Feedback Environment Scale to improve feasibility while retaining the original theoretical framework and dimensional structure. The abbreviated scale was derived through a systematic item reduction process grounded in the framework proposed by Stanton et al. (2002), which combines judgmental, internal, and external criteria. Negatively worded and semantically redundant items were removed to enhance clarity and reduce respondent burden. Importantly, the reduction resulted in only negligible changes in facet-level reliability, and associations with relevant external variables — such as job satisfaction and perceptions of organizational politics — remained substantively unchanged (Rosen, 2006). In the original study, the short Supervisor Feedback Environment Scale demonstrated excellent internal consistency (Cronbach’s α = .95; α = .94 for the co-worker version).

The short version preserves the seven dimensions originally proposed by Steelman et al. (2004) and maintains the same source-specific structure, ensuring continuity with the underlying theoretical model. Items are rated on a 7-point Likert-type scale (1 = strongly disagree to 7 = strongly agree), consistent with the response format of the original instrument. Since its introduction, this abbreviated measure has been widely used in empirical research, particularly in field and applied organizational settings (e.g., Dahling et al., 2010, 2015; Borden et al., 2017). Across subsequent studies, the supervisor feedback environment has consistently shown high internal consistency at the scale level, with Cronbach’s alpha coefficients ranging from .91 (Dahling et al., 2010) to .95 (Borden et al., 2017). Despite its use, formal psychometric validation of the short version in non-English and cross-cultural contexts remains limited.

Objectives of the Present Study

Although the short Supervisor Feedback Environment Scale developed by Rosen (2006) has been used in prior empirical research, it has not yet been formally validated in the Italian context. Evidence from non-Italian samples has generally supported its reliability and construct-related validity (e.g., Dahling et al., 2010, 2015; Borden et al., 2017); however, its factorial structure and overall psychometric functioning remain unexplored in Italian work settings. Given the central role of the supervisor feedback environment in shaping everyday feedback processes and key work-related outcomes, including employee well-being and motivation (Katz et al., 2021), a formal psychometric evaluation of this instrument in the Italian context is particularly warranted. Against this background, the present study addresses this gap by providing initial evidence of the reliability and validity of the Italian version of the short Supervisor Feedback Environment Scale. Specifically, the study aimed to (a) test its hypothesized second-order factorial structure, (b) assess the internal consistency of the identified dimensions, and (c) examine nomological validity through theoretically grounded associations with relevant work-related variables.

Method

Participants

The sample consisted of 368 employees working in the Italian public administration. With regard to gender, 47.7% of participants identified as women, 41.2% as men, and 4.7% as other or preferred not to disclose their gender; gender information was missing for 6.4% of the sample. Participants were predominantly middle-aged, with most respondents falling within the 41-55 years age range. To protect respondents’ anonymity, educational attainment was not collected. Regarding organizational sector, participants were employed across different areas of public administration, including legal, control, core administrative functions, and services, with the largest proportion working in the control and core sectors. In terms of organizational classification, the majority of respondents were in the area of functionaries (83.1%), a role category requiring a university degree, while a smaller proportion were in the area of assistants (16.9%).

Measures

All study variables were assessed using validated self-report measures. Whenever available, Italian versions with established psychometric properties were employed. For all multi-item scales, internal consistency reliability was estimated using Cronbach’s alpha in the present sample.

Supervisor Feedback Environment

The supervisor feedback environment was measured using the Italian short version of the Supervisor Feedback Environment Scale (24 items), grounded in the theoretical framework originally proposed by Steelman et al. (2004). The scale assesses multiple facets of feedback exchanges with the supervisor and is conceptually organized into seven dimensions. Participants responded on a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree). Example items include «My supervisor is generally familiar with my performance on the job», «My supervisor gives me useful feedback about my job performance», and «My supervisor is supportive when giving me feedback about my job performance». In the present sample, internal consistency across dimensions ranged from marginal to excellent (Cronbach’s α = .64 to .93), with excellent reliability for the overall scale (α = .94).

Feedback Orientation

Feedback orientation was assessed using the Italian version (16 items) (Lo Piccolo et al., 2025) of the Feedback Orientation Scale (Linderbaum & Levy, 2010). The scale comprises four dimensions —Utility, Responsibility, Social Awareness, and Feedback Self-Efficacy — and demonstrated excellent internal consistency in the present sample (Utility α = .93; Responsibility α = .84; Social Awareness α = .88; Self-Efficacy α = .85). Items capture the extent to which individuals perceive feedback as useful for achieving their goals (e.g., «I find feedback essential to achieving my goals»), feel responsible for acting upon it (e.g., «I feel responsible for following up on feedback appropriately»), use it to understand how they are perceived by others (e.g., «Feedback helps me understand how others perceive me»), and feel confident in their ability to manage feedback effectively (e.g., «I believe I am able to use the feedback I receive effectively»). Responses were provided on a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree), with higher scores indicating a more positive orientation toward feedback.

Job Satisfaction

Overall job satisfaction was measured using a single-item indicator adapted from Wanous et al. (1997): «Overall, how satisfied are you with your job?». Responses were recorded on a 5-point scale ranging from 1 (not at all satisfied) to 5 (completely satisfied). Single-item measures of job satisfaction have been shown to provide valid global assessments of the construct in organizational research.

Work Engagement

Work engagement was assessed using three items derived from the Utrecht Work Engagement Scale (UWES-9; Schaufeli & Bakker, 2003; Balducci et al., 2010; Paganin & Petruzziello, 2025), with one item representing each core dimension of engagement: vigor («At my work, I feel bursting with energy»), dedication («I am enthusiastic about my job»), and absorption («I am immersed in my work»). Item selection was guided by prior empirical evidence (e.g., Schaufeli et al., 2017) and by considerations of linguistic clarity and content appropriateness for the Italian working population. Responses were provided on a 7-point frequency scale ranging from 0 (never) to 6 (every day). The scale showed acceptable internal consistency in the present sample (α = .74).

Workload

Perceived workload was assessed using four items from the Demands dimension of the short Italian version of the UK Health and Safety Executive Stress Indicator Tool (Toderi et al., 2013; Balducci et al., 2015): «I have unattainable deadlines», «I have to neglect some tasks because I have too much to do», «I am pressured to work beyond regular hours», and «I have time deadlines that are impossible to meet». Items were rated on a 5-point Likert-type frequency scale ranging from 1 (never) to 5 (always), with higher scores indicating higher perceived workload. In the present sample, the scale demonstrated good internal consistency (α = .83).

Employee well-being

Employee well-being was assessed using the WHO-5 Well-Being Index (Bech et al., 2006), a brief self-report measure developed by the World Health Organization and validated in the Italian context by Cedrone et al. (2017). The instrument consists of five positively worded items assessing subjective well-being over the previous two weeks (e.g., feeling cheerful, calm, active and rested). Responses are provided on a 6-point Likert-type scale ranging from 0 (never) to 5 (all of the time). Item scores are summed to obtain a total score, with higher values indicating higher levels of perceived well-being. In the present sample, the scale showed excellent internal consistency (Cronbach’s α = .90).

Procedure

The study adopted a cross-sectional design and surveyed employed adults in the Italian public administration through an anonymous online questionnaire. Participants were recruited on a voluntary basis through institutional communication channels and internal mailing lists. The study was designed and reported in accordance with the APA Journal Article Reporting Standards for quantitative research involving new data collection and structural equation modeling (Appelbaum et al., 2018, Tables 1 and 7). The research complied with the principles of the Declaration of Helsinki and received approval from the Ethics Committee of the authors’ institution (approval code: 0217447). Before data collection, participants were informed about the aims of the study and provided electronic informed consent.

The short Supervisor Feedback Environment Scale was translated into Italian following the back-translation procedure outlined by Brislin (1970). Two independent bilingual translators produced the forward and backward translations, which were subsequently compared to ensure semantic and conceptual equivalence between the Italian and original English versions.

Before the main analyses, inter-item correlations were examined to assess multicollinearity. No correlations exceeded the .85 threshold, indicating the absence of problematic item redundancy (Weston & Gore, 2006).
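This screening step can be sketched computationally. The following Python fragment (the reported analyses were run in R; this is an illustrative sketch only, and the simulated scores and names are assumptions, not study data) checks whether any absolute inter-item correlation exceeds the .85 redundancy threshold:

```python
import numpy as np

def max_interitem_correlation(items: np.ndarray) -> float:
    """Return the largest absolute inter-item correlation.

    `items` is an (n_respondents, n_items) matrix of item scores.
    """
    r = np.corrcoef(items, rowvar=False)  # item-by-item correlation matrix
    np.fill_diagonal(r, 0.0)              # ignore the trivial unit diagonal
    return float(np.max(np.abs(r)))

# Hypothetical data: 368 respondents, 24 items, 1-7 response scale
rng = np.random.default_rng(0)
scores = rng.integers(1, 8, size=(368, 24)).astype(float)

# Flag problematic redundancy if any |r| exceeds .85 (Weston & Gore, 2006)
no_redundancy = max_interitem_correlation(scores) < 0.85
```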

Data Analysis

To address the study objectives, a series of confirmatory factor analytic and correlational analyses were conducted to evaluate the psychometric properties of the Italian version of the short Supervisor Feedback Environment Scale. All analyses were conducted within the theoretical framework of the feedback environment articulated by Steelman et al. (2004).

First, the factorial structure of the scale was examined through confirmatory factor analysis by comparing a theoretically grounded multidimensional model, consistent with the original framework, with a more parsimonious unidimensional alternative, following common practice in validation studies (Brown, 2015; Momotani & Otsuka, 2018). Confirmatory factor analyses were performed in R using the lavaan package, applying the robust maximum likelihood (MLR) estimator to account for potential nonnormality.

Model fit was evaluated using the comparative fit index (CFI), the Tucker-Lewis index (TLI), and the root mean square error of approximation (RMSEA). In line with contemporary recommendations, multiple fit indices were jointly considered rather than relying on rigid cutoff values (Brown, 2015; Kline, 2011). CFI and TLI values close to or above .90 were interpreted as indicative of acceptable fit, particularly in complex, multidimensional item-level models. With regard to RMSEA, values in the .08-.10 range were interpreted as reflecting a mediocre but acceptable level of fit (MacCallum et al., 1996), consistent with evidence that RMSEA may be conservative in multifactor models with higher complexity and degrees of freedom (Marsh et al., 2004; Chen, 2007).
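For readers less familiar with these indices, their conventional formulas can be made concrete in a short sketch. The Python function below computes CFI, TLI, and RMSEA from model and baseline (null) chi-square statistics using the standard non-robust formulas; the chi-square values in the example call are hypothetical, not those of the present study:

```python
import math

def fit_indices(chi2_m, df_m, chi2_b, df_b, n):
    """CFI, TLI, and RMSEA from model (m) and baseline (b) chi-squares."""
    d_m = max(chi2_m - df_m, 0.0)  # model noncentrality estimate
    d_b = max(chi2_b - df_b, 0.0)  # baseline noncentrality estimate
    cfi = 1.0 - d_m / max(d_b, d_m, 1e-12)
    tli = ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1.0)
    rmsea = math.sqrt(d_m / (df_m * (n - 1)))
    return cfi, tli, rmsea

# Illustrative call with hypothetical chi-square values
cfi, tli, rmsea = fit_indices(chi2_m=520.0, df_m=129,
                              chi2_b=5000.0, df_b=276, n=368)
```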

Second, the reliability of the identified dimensions was assessed by estimating internal consistency coefficients using Cronbach’s alpha (Nunnally & Bernstein, 1994).
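As a point of reference, Cronbach's alpha follows directly from the item variances and the variance of the scale totals. A minimal Python sketch (the analyses reported here were conducted in R; this fragment is illustrative only):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of scale totals
    return (k / (k - 1)) * (1.0 - item_vars / total_var)
```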

Finally, evidence of construct validity was examined by testing the nomological network of the supervisor feedback environment through its associations with theoretically relevant work-related variables. Specifically, associations were examined with work engagement, assessed using selected items from the Utrecht Work Engagement Scale (Schaufeli & Bakker, 2003; Balducci et al., 2010), job satisfaction measured with a global single-item indicator (Wanous et al., 1997), employee well-being assessed with the WHO-5 Well-Being Index validated in the Italian context (Cedrone et al., 2017), feedback orientation measured with the Italian version of the Feedback Orientation Scale (Lo Piccolo et al., 2025), and perceived workload assessed using the Demands dimension of the short Health and Safety Executive Stress Indicator Tool validated in Italy (Balducci et al., 2015). These associations were examined in line with prior empirical and meta-analytic evidence on the supervisor feedback environment.

Results

Confirmatory factor analysis

Confirmatory factor analyses (CFA) were conducted in R using the lavaan package to test competing measurement models of the Supervisor Feedback Environment Scale, derived from the theoretical framework proposed by Steelman et al. (2004) and consistent with prior validation procedures (e.g., Momotani & Otsuka, 2018). A hypothesized second-order seven-factor model, reflecting the multidimensional structure of the feedback environment, was compared with a more parsimonious one-factor model, which is commonly examined in scale validation studies for comparison purposes.

Because multivariate normality was not supported, as indicated by Mardia’s test, models were estimated using the robust maximum likelihood estimator (MLR).

The one-factor model showed a poor fit to the data (CFI = .73, TLI = .70, RMSEA = .15), indicating that a unidimensional representation of the construct was not supported. In contrast, the second-order seven-factor model demonstrated a substantially better and acceptable fit to the data (CFI = .92, TLI = .89, RMSEA = .09; see Table 1). A chi-square difference test further indicated that the second-order seven-factor model provided a significantly better representation of the data than the one-factor solution, Δχ²(21) = 1035.60, p < .001. Overall, confirmatory factor analyses supported the theoretically grounded second-order seven-factor model over a unidimensional alternative.
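The chi-square difference test above can be illustrated as follows. Note that with the MLR estimator a scaled difference test (e.g., Satorra-Bentler, as implemented in lavaan's anova() method) is typically preferred; the Python sketch below shows the naive unscaled version, and the individual chi-square inputs are hypothetical values chosen only to reproduce the reported Δχ²(21) = 1035.60:

```python
from scipy.stats import chi2

def chi_square_difference(chi2_nested, df_nested, chi2_full, df_full):
    """Naive (unscaled) chi-square difference test for nested CFA models.

    The nested model is the more constrained one (here, one-factor);
    the full model is the less constrained one (here, second-order).
    """
    delta_chi2 = chi2_nested - chi2_full
    delta_df = df_nested - df_full
    p = chi2.sf(delta_chi2, delta_df)  # upper-tail probability
    return delta_chi2, delta_df, p

# Hypothetical component chi-squares reproducing the reported difference
d, ddf, p = chi_square_difference(chi2_nested=1555.60, df_nested=150,
                                  chi2_full=520.00, df_full=129)
```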

Table 1

Comparison of one-factor and seven-factor CFA models

Model           df     CFI     TLI     RMSEA
One-factor      189    .73     .70     .15
Second-order    129    .92     .89     .09

Note: CFI = Comparative Fit Index; TLI = Tucker–Lewis Index; RMSEA = Root Mean Square Error of Approximation.

Standardized factor loadings were examined to evaluate the adequacy of the item-factor relationships. Following Tabachnick and Fidell (2013), standardized loadings of .32 or higher were considered meaningful. All items showed adequate loadings on their intended latent factors, with values ranging from .45 to .96. The majority of items exhibited moderate to strong loadings (λ ≥ .60), supporting the adequacy of the proposed measurement model.

Potential cross-loadings were also inspected. Secondary loadings were generally low and consistently smaller than the corresponding primary loadings, indicating that items were clearly associated with their intended factors. This pattern suggests the absence of substantive cross-loading issues and supports the discriminant structure of the measurement model.

Internal consistency was assessed using Cronbach’s alpha. The overall Supervisor Feedback Environment Scale showed excellent reliability (α = .94), closely matching the value reported by Rosen (2006) for the short scale at the overall level (α = .95). Although Rosen (2006) did not systematically report facet-level alpha coefficients, the abbreviated scale was documented to preserve reliability levels comparable to those of the original version across dimensions. In the present study, subscale alphas ranged from .64 to .93 (see Table 2), with most dimensions demonstrating acceptable to excellent internal consistency. Lower reliability was observed for the Source Availability dimension (α = .64), a value that may be considered acceptable in the context of an initial validation study. As noted by DeVellis and Thorpe (2021), reliability coefficients in the .60 range may be regarded as minimally acceptable during early stages of scale development, particularly when constructs are broad, context-dependent, or when preserving content validity is prioritized over internal homogeneity. In this respect, the lower alpha observed for Source Availability likely reflects the situational and structurally contingent nature of perceived access to the supervisor rather than a psychometric weakness of the scale. Overall, the reliability results support the retention of all dimensions in the seven-factor model.

Table 2

Cronbach’s Alpha for Supervisor Feedback Environment Dimensions

Dimension                        Cronbach’s α
Source Credibility               .85
Feedback Quality                 .93
Feedback Delivery                .78
Unfavorable Feedback             .80
Favorable Feedback               .92
Source Availability              .64
Promotes Feedback Seeking        .71
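As an illustration of how the coefficients in Table 2 are obtained, Cronbach’s alpha can be computed directly from an item-response matrix. The sketch below uses hypothetical Likert-type data, not the study’s actual responses, and assumes NumPy is available.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses of 5 employees to two Likert items
responses = np.array([[4, 5], [3, 3], [5, 5], [2, 3], [4, 4]], dtype=float)
alpha = cronbach_alpha(responses)  # → 0.93
```

Note that alpha rises with the number of items and with inter-item covariance, which is one reason short, situationally heterogeneous subscales such as Source Availability tend to yield lower values.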

Evidence for nomological validity was examined by testing the associations between the supervisor feedback environment and theoretically relevant work-related variables. As expected, perceptions of a more favorable supervisor feedback environment were positively associated with work engagement (r = .36, p < .001), employee well-being (r = .30, p < .001), and job satisfaction (r = .46, p < .001), supporting the role of the feedback environment as a key contextual resource in the workplace, consistent with prior empirical and meta-analytic evidence (Katz et al., 2021).

In addition, the supervisor feedback environment showed a positive association with feedback orientation (r = .15, p < .01), indicating that a supportive feedback context is related to a greater individual predisposition to seek, accept, and use feedback, while remaining empirically distinct from this dispositional construct. Consistent with the job demands-resources model (Demerouti et al., 2001), the supervisor feedback environment was negatively related to perceived workload (r = -.16, p < .01), suggesting that a supportive feedback context operates as a job resource associated with lower perceived demands. Overall, the observed pattern of correlations aligns with theoretical expectations and prior empirical findings, providing support for the nomological validity of the Italian short version of the Supervisor Feedback Environment Scale.
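The significance tests behind coefficients such as these follow the standard t transformation of Pearson’s r with n − 2 degrees of freedom. A minimal sketch with hypothetical scores (assuming NumPy):

```python
import numpy as np

def pearson_r_and_t(x, y):
    """Pearson r and the t statistic (df = n - 2) used to test r against zero."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = x.size
    r = float(np.corrcoef(x, y)[0, 1])
    t = r * np.sqrt(n - 2) / np.sqrt(1 - r ** 2)
    return r, t

# Hypothetical feedback-environment and work-engagement scores for 5 employees
fe = [1, 2, 3, 4, 5]
we = [1, 2, 3, 5, 4]
r, t = pearson_r_and_t(fe, we)  # r = 0.90
```

With the large sample used here (N = 368), even modest correlations such as r = .15 clear conventional significance thresholds, which is why interpretation should rest on effect size as much as on p-values.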

Discussion

The present study examined the psychometric properties of the Italian version of the short Supervisor Feedback Environment Scale developed by Rosen (2006), grounded in the theoretical framework originally proposed by Steelman et al. (2004). Despite the use of this abbreviated measure in international research, evidence regarding its factorial structure and psychometric functioning across cultural contexts has remained limited. Addressing this gap, the current study provides the first preliminary validation evidence of the short Supervisor Feedback Environment Scale in the Italian context.

Overall, the findings offer support for the psychometric adequacy of the Italian short version of the scale. Confirmatory factor analyses clearly favored the theoretically grounded seven-factor model over a unidimensional alternative, replicating the multidimensional structure originally articulated by Steelman et al. (2004) and largely consistent with the abbreviated operationalization proposed by Rosen (2006). This result is consistent with prior validation efforts conducted in other cultural contexts, such as the Japanese adaptation by Momotani and Otsuka (2018), and reinforces the conceptualization of the feedback environment as a multifaceted construct that cannot be adequately captured by a single global dimension.

Regarding model fit, we acknowledge that the RMSEA of the proposed second-order seven-factor model falls in the .08-.09 range. According to the interpretative guidelines by MacCallum et al. (1996), RMSEA values between .08 and .10 indicate a mediocre, yet not unacceptable, level of fit and do not necessarily reflect poor model performance. This interpretation is consistent with evidence showing that RMSEA tends to be more stringent in complex, multifactor item-level models (Marsh et al., 2004), where higher values may emerge despite a theoretically coherent factor structure. Moreover, RMSEA has been shown to vary as a function of model complexity and degrees of freedom, potentially yielding conservative indications even when model constraints are tenable (Chen, 2007). At the same time, the observed RMSEA values call for a cautious interpretation of the results. As this study represents the first examination of the Italian adaptation of the Supervisor Feedback Environment Scale using a second-order measurement model, the present findings should be considered supportive rather than definitive.
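For reference, the RMSEA point estimate discussed above is a simple function of the model chi-square, its degrees of freedom, and the sample size. The sketch below uses the common Steiger-Lind formulation with illustrative fit values, not the actual statistics of the present study.

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """RMSEA point estimate: sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Illustrative values only (not the fit statistics of the present study)
value = rmsea(chi2=900.0, df=300, n=368)  # → ≈ 0.074
```

The formula makes the points cited above concrete: the penalty term scales with chi-square in excess of df, so complex item-level models with many degrees of freedom can yield values in the .08-.09 range even when misfit per degree of freedom is modest.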

Inspection of standardized factor loadings further supported the adequacy of the measurement model. All items loaded meaningfully on their intended latent factors, with the majority showing moderate to strong loadings. Although some items displayed borderline secondary loadings, this pattern is not uncommon in multidimensional self-report measures and is particularly typical of scales including reverse-worded items. Importantly, primary loadings consistently exceeded secondary loadings, indicating that item ambiguity was limited and did not compromise the interpretability of the factor structure.

Regarding reliability, most dimensions demonstrated good to excellent internal consistency, as indicated by Cronbach’s alpha. Lower reliability was observed for the Source Availability dimension, a pattern consistent with previous applications of the Feedback Environment Scale using both the full and short versions. This finding likely reflects the limited number of items and the situational nature of this dimension, which captures structurally contingent aspects of access to the supervisor rather than stable evaluative features of the feedback process; inter-item correlations are therefore expected to be lower for this dimension. The recurrence of this pattern across studies suggests that reduced internal consistency for Source Availability represents a structural characteristic of the construct rather than a culture-specific limitation of the Italian adaptation.

Evidence for construct validity was further supported through the examination of the nomological network of the supervisor feedback environment. As theoretically expected, perceptions of a more favorable supervisor feedback environment were positively associated with work engagement, job satisfaction, employee well-being, and feedback orientation, and negatively associated with perceived workload. This pattern aligns closely with prior empirical and meta-analytic evidence indicating that the feedback environment is a relevant relational and contextual job resource (Katz et al., 2021). Notably, the association with feedback orientation supports the view that the feedback environment captures a contextual feature of the workplace that is related to, yet empirically distinct from, individual predispositions toward feedback.

Taken together, these findings contribute to the literature in several ways. From a theoretical perspective, the study provides further support for the cross-cultural applicability of the feedback environment framework, demonstrating that the multidimensional structure articulated by Steelman et al. (2004) and operationalized in the short form by Rosen (2006) can be meaningfully replicated in the Italian public administration context. Importantly, the present study does not introduce a new conceptualization of the feedback environment but rather strengthens the empirical foundation of an existing, widely used operationalization.

From a methodological perspective, the results support the use of the short Supervisor Feedback Environment Scale as a reliable alternative to the full version in applied and field research. In light of increasing constraints on survey length and respondent burden in organizational research, the availability of a validated Italian short form is a valuable resource for researchers and practitioners seeking to assess everyday feedback processes without compromising psychometric quality.

Several limitations should be acknowledged. First, although the sample size was adequate for the confirmatory factor analyses conducted in the present study, it remains relatively modest for a comprehensive scale validation. Future research should therefore seek to replicate these findings in larger and independent samples. Second, the sample was drawn exclusively from the Italian public administration, which may limit the generalizability of the results to other organizational contexts. Replication in private-sector settings and more heterogeneous occupational samples would be valuable to further assess the robustness of the proposed measurement model. Third, although evidence of nomological validity was provided, future studies should extend the validation of the scale by examining predictive validity and temporal stability through longitudinal designs.

From an applied perspective, the availability of an Italian short version of the Supervisor Feedback Environment Scale represents a meaningful contribution. In this respect, the scale may also be useful in counseling and organizational support contexts, where the assessment of everyday supervisory feedback practices can inform individual or group-level interventions aimed at enhancing relational quality and employee well-being.

The brief format makes the instrument particularly suitable for organizational contexts in which survey length and respondent burden constitute relevant constraints, such as large-scale employee surveys, organizational climate assessments, and routine monitoring initiatives. By allowing the assessment of key features of the supervisor feedback environment with a limited number of items while preserving its multidimensional structure, the short scale enables organizations to monitor everyday feedback processes in a systematic and feasible manner.

In conclusion, the present study provides initial evidence supporting the factorial structure, reliability, and construct validity of the Italian version of the short Supervisor Feedback Environment Scale. By offering a formal psychometric examination of a measure that has been widely used but not previously validated in the Italian context, this study contributes to the feedback literature and provides a useful tool for future research on feedback processes, employee well-being, and sustainable performance in organizations.

References

Anseel, F., & Sherf, E. N. (2024). A 25-year review of research on feedback in organizations: From simple rules to complex realities. Annual Review of Organizational Psychology and Organizational Behavior, 12(1), 19-43. https://doi.org/10.1146/annurev-orgpsych-110622-031927

Appelbaum, M., Cooper, H., Kline, R. B., Mayo-Wilson, E., Nezu, A. M., & Rao, S. M. (2018). Journal article reporting standards for quantitative research in psychology: The APA Publications and Communications Board task force report. American Psychologist, 73(1), 3-25. https://doi.org/10.1037/amp0000191

Balducci, C., Fraccaroli, F., & Schaufeli, W. B. (2010). Psychometric properties of the Italian version of the Utrecht Work Engagement Scale (UWES-9). European Journal of Psychological Assessment, 26(2), 143-149. https://doi.org/10.1027/1015-5759/a000020

Balducci, C., Romeo, L., Brondino, M., Lazzarini, G., Benedetti, F., Toderi, S., Fraccaroli, F., & Pasini, M. (2015). The validity of the short UK Health and Safety Executive Stress Indicator Tool for the assessment of the psychosocial work environment in Italy. European Journal of Psychological Assessment, 33(3), 149-157. https://doi.org/10.1027/1015-5759/a000280

Bech, P., Olsen, L. R., Kjoller, M., & Rasmussen, N. K. (2006). Measuring well-being rather than the absence of distress symptoms: a comparison of the SF-36 Mental Health subscale and the WHO-Five well-being scale. International Journal of Methods in Psychiatric Research, 12(2), 85-91. https://doi.org/10.1002/mpr.145

Borden, L., Levy, P. E., & Silverman, S. B. (2017). Leader arrogance and subordinate outcomes: The role of feedback processes. Journal of Business and Psychology, 33(3), 345-364. https://doi.org/10.1007/s10869-017-9501-1

Bowling, N. A., & Hammond, G. D. (2008). A meta-analytic examination of the construct validity of the Michigan Organizational Assessment Questionnaire Job Satisfaction Subscale. Journal of Vocational Behavior, 73(1), 63-77. https://doi.org/10.1016/j.jvb.2008.01.004

Brislin, R. W. (1970). Back-Translation for Cross-Cultural Research. Journal of Cross-Cultural Psychology, 1(3), 185-216. https://doi.org/10.1177/135910457000100301

Brown, T. A. (2015). Confirmatory factor analysis for applied research (2nd ed.). Guilford Press.

Browne, M. W., & Cudeck, R. (1992). Alternative ways of assessing model fit. Sociological Methods & Research, 21(2), 230-258. https://doi.org/10.1177/0049124192021002005

Cedrone, F., Greco, E., & De Sio, S. (2017). Benessere nei luoghi di lavoro: valutazione della percezione attraverso la somministrazione del questionario WHO-5 Well-being Index. Salute e Società, 3, 136-147. https://doi.org/10.3280/ses2017-su3009

Chen, F. F. (2007). Sensitivity of goodness of fit indexes to lack of measurement invariance. Structural Equation Modeling: A Multidisciplinary Journal, 14(3), 464-504. https://doi.org/10.1080/10705510701301834

Dahling, J. J., Chau, S. L., & O’Malley, A. (2010). Correlates and consequences of feedback orientation in organizations. Journal of Management, 38(2), 531-546. https://doi.org/10.1177/0149206310375467

Dahling, J., O’Malley, A. L., & Chau, S. L. (2015). Effects of feedback motives on inquiry and performance. Journal of Managerial Psychology, 30(2), 199-215. https://doi.org/10.1108/jmp-12-2012-0409

Demerouti, E., Bakker, A. B., Nachreiner, F., & Schaufeli, W. B. (2001). The job demands-resources model of burnout. Journal of Applied Psychology, 86(3), 499-512. https://doi.org/10.1037/0021-9010.86.3.499

DeVellis, R. F., & Thorpe, C. T. (2021). Scale development: Theory and applications. Sage Publications.

Elicker, J. D., Cubrich, M., Chen, J. M., Sully de Luque, M., & Gabel-Shemueli, R. (2019). Employee reactions to the feedback environment. In J. R. Williams & L. A. Steelman (Eds.), Feedback in the workplace (pp. 175-192). Springer. https://doi.org/10.1007/978-3-030-30915-2_9

Gallo, J., Walton, A., Shah, N., Halstead, S., & Bryant, C. (2022). Investigating the interaction between the feedback orientation & the feedback environment on employee performance. Journal of Management and Engineering Integration, 15(1), 57-69. https://doi.org/10.62704/10057/24786

Giffin, K. (1967). The contribution of studies of source credibility to a theory of interpersonal trust in the communication process. Psychological Bulletin, 68(2), 104-120. https://doi.org/10.1037/h0024833

Herold, D. M., & Parsons, C. K. (1985). Assessing the feedback environment in work organizations: Development of the job feedback survey. Journal of Applied Psychology, 70(2), 290-305. https://doi.org/10.1037/0021-9010.70.2.290

Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1-55. https://doi.org/10.1080/10705519909540118

Ilgen, D. R., Fisher, C. D., & Taylor, M. S. (1979). Consequences of individual feedback on behavior in organizations. Journal of Applied Psychology, 64(4), 349-371. https://doi.org/10.1037/0021-9010.64.4.349

Katz, I. M., Rauvola, R. S., & Rudolph, C. W. (2021). Feedback environment: A meta‐analysis. International Journal of Selection and Assessment, 29(3-4), 305–325. https://doi.org/10.1111/ijsa.12350

Kline, R. B. (2011). Principles and practice of structural equation modeling (3rd ed.). The Guilford Press.

Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254-284. https://doi.org/10.1037/0033-2909.119.2.254

Linderbaum, B. A., & Levy, P. E. (2010). The development and validation of the Feedback Orientation Scale (FOS). Journal of Management, 36(6), 1372-1405. https://doi.org/10.1177/0149206310373145

London, M., & Smither, J. W. (2002). Feedback orientation, feedback culture, and the longitudinal performance management process. Human Resource Management Review, 12(1), 81-100. https://doi.org/10.1016/s1053-4822(01)00043-2

Lo Piccolo, E., Mariani, M. G., & Petruzziello, G. (2025). Italian validation of the Feedback Orientation Scale: Psychometric properties and cultural adaptation. Behavioral Sciences, 15(12), 1740. https://doi.org/10.3390/bs15121740

Marsh, H. W., Hau, K., & Wen, Z. (2004). In search of golden rules: Comment on hypothesis-testing approaches to setting cutoff values for fit indexes and dangers in overgeneralizing Hu and Bentler’s (1999) findings. Structural Equation Modeling: A Multidisciplinary Journal, 11(3), 320-341. https://doi.org/10.1207/s15328007sem1103_2

MacCallum, R. C., Browne, M. W., & Sugawara, H. M. (1996). Power analysis and determination of sample size for covariance structure modeling. Psychological Methods, 1(2), 130-149. https://doi.org/10.1037/1082-989X.1.2.130

Momotani, H., & Otsuka, Y. (2018). Reliability and validity of the Japanese version of the Feedback Environment Scale (FES-J) for workers. Industrial Health, 57(3), 326–341. https://doi.org/10.2486/indhealth.2018-0019

Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory. McGraw-Hill.

Paganin, G., & Petruzziello, G. (2025). Italian adaptation of the ultra-short version of the Utrecht Work Engagement Scale (UWES-3): Psychometric properties. Counseling, 18(3), 25–34. https://doi.org/10.14605/CS1832503

Rosen, C. C. (2006). Politics, stress, and exchange perceptions: A dual process model relating organizational politics to employee outcomes (Doctoral dissertation). University of Akron.

Schaufeli, W. B., Shimazu, A., Hakanen, J., Salanova, M., & De Witte, H. (2017). An ultra-short measure for work engagement. European Journal of Psychological Assessment, 35(4), 577-591. https://doi.org/10.1027/1015-5759/a000430

Schaufeli, W. B., & Bakker, A. B. (2003). UWES-Utrecht Work Engagement Scale: Test manual. Unpublished manuscript, Department of Psychology, Utrecht University, Utrecht. https://doi.org/10.1037/t07164-000

Schleicher, D. J., Baumann, H. M., Sullivan, D. W., Levy, P. E., Hargrove, D. C., & Barros-Rivera, B. A. (2018). Putting the system into performance management systems: A review and agenda for performance management research. Journal of Management, 44(6), 2209-2245. https://doi.org/10.1177/0149206318755303

Stanton, J. M., Sinar, E. F., Balzer, W. K., & Smith, P. C. (2002). Issues and strategies for reducing the length of self‐report scales. Personnel Psychology, 55(1), 167–194. https://doi.org/10.1111/j.1744-6570.2002.tb00108.x

Steelman, L. A., Levy, P. E., & Snell, A. F. (2004). The Feedback Environment Scale: Construct Definition, Measurement, and Validation. Educational and Psychological Measurement, 64(1), 165-184. https://doi.org/10.1177/0013164403258440

Tabachnick, B. G., & Fidell, L. S. (2013). Using multivariate statistics (6th ed.). Pearson.

Toderi, S., Balducci, C., Edwards, J. A., Sarchielli, G., Broccoli, M., & Mancini, G. (2013). Psychometric properties of the UK and Italian versions of the HSE stress indicator tool. European Journal of Psychological Assessment, 29, 72-79. https://doi.org/10.1027/1015-5759/a000280

Wanous, J. P., Reichers, A. E., & Hudy, M. J. (1997). Overall job satisfaction: How good are single-item measures? Journal of Applied Psychology, 82(2), 247-252. https://doi.org/10.1037/0021-9010.82.2.247

Weston, R., & Gore, P. A., Jr. (2006). A Brief Guide to Structural Equation Modeling. The Counseling Psychologist, 34(5), 719-751. https://doi.org/10.1177/0011000006286345


  1. Dipartimento di Psicologia «Renzo Canestrari» – Alma Mater Studiorum, Università di Bologna, Bologna, Italia.

  2. Dipartimento di Psicologia «Renzo Canestrari» – Alma Mater Studiorum, Università di Bologna, Bologna, Italia.

  3. Dipartimento di Psicologia «Renzo Canestrari» – Alma Mater Studiorum, Università di Bologna, Bologna, Italia.

  4. Department of Psychology «Renzo Canestrari» – Alma Mater Studiorum, University of Bologna, Bologna, Italy.

  5. Department of Psychology «Renzo Canestrari» – Alma Mater Studiorum, University of Bologna, Bologna, Italy.

  6. Department of Psychology «Renzo Canestrari» – Alma Mater Studiorum, University of Bologna, Bologna, Italy.


 
