Journal number 1 ∘ Tariel Elashvili
Improving Human Capital Quality Through Simulation-Based Learning: A Quasi-Experimental Study in Georgia

DOI: 10.52340/ekonomisti.2026.01.16

Annotation. This quasi-experimental study (N = 36 master's students) evaluated the effectiveness of simulation-based learning in developing strategic management competencies compared to traditional lectures, and examined how facilitation style moderates this effectiveness. Four groups were compared: traditional lectures (control), and Capstone 2.0 simulation with three facilitation approaches—directive (explicit guidance), reflective (Socratic questioning), and engagement (gamification). Competencies were assessed at pretest, post-test, and four-month follow-up. One-way ANOVA revealed a large main effect (F(3,32) = 12.607, p < .001, η² = .542). Facilitation style critically moderated outcomes: engagement produced very large effects (Cohen's d = 2.79), reflective showed large effects (d = 1.48), while directive demonstrated only moderate, non-significant effects (d = 0.59, p = .645)—statistically indistinguishable from traditional lectures despite identical platform access. Four-month follow-up confirmed simulation groups retained 2.1–3.3 times more absolute knowledge than traditional instruction. Findings integrate Becker's (1964) human capital theory, Kolb's (1984) experiential learning theory, and Knowles's (1984) andragogy, demonstrating that simulation-based learning optimizes the educational production function when appropriately facilitated. For Georgia, where unemployment stands at 13.9% and NEET rates reach 26.9%, results suggest simulation-based learning could address skills mismatch when properly implemented, though employment outcome validation remains needed. This study provides the first empirical evidence in Georgia on simulation effectiveness, the first systematic facilitation comparison, and rare medium-term retention data. Critically, findings demonstrate that implementation quality matters more than technology selection—an essential insight for resource-constrained institutions.

Keywords: human capital theory, simulation-based learning, facilitation style, skills mismatch, quasi-experimental design. 

Introduction

Human capital theory, established by Schultz (1961) and Becker (1964), argues that education represents an investment increasing individual and societal productivity. This framework has become particularly relevant in the 21st century knowledge economy, where a country's competitiveness depends directly on population skills and competencies (Hanushek & Woessmann, 2012). However, Becker (1964) emphasized that education yields economic returns only when developing genuinely applicable, productive skills—mere credentials prove insufficient.

Georgia, as a developing economy pursuing European integration, faces significant human capital challenges. Despite 54% of adults holding higher education credentials (World Bank, 2023), unemployment stands at 13.9% and the NEET rate among youth reaches 26.9% (Geostat, 2024; ETF, 2024). The World Economic Forum (2023) ranks Georgia 79th among 141 countries in skills indicators, while the European Training Foundation (2022) notes a significant gap between labor market demands and education system outputs. This mismatch translates into economic losses: the Boston Consulting Group (2020) estimates global skills gaps affect 1.3 billion people and cost $8 trillion annually in unrealized productivity, while OECD research (Dougherty et al., 2025) finds production losses from skills mismatch range from 0.5% to 9% of GDP across developed economies.

Employers consistently report gaps between graduate competencies and workplace requirements. Research by the National Association of Colleges and Employers (2024) reveals that 91.2% of employers prioritize critical thinking skills, 86.3% seek teamwork abilities, and 85.8% require problem-solving competencies. McKinsey (2023) found that 64% of executives consider strategic thinking the most critical skill. Yet traditional lecture-based business education, primarily oriented toward theoretical knowledge, often fails to develop these practical competencies effectively (Succi & Canovi, 2020).

From a human capital investment perspective, traditional lecture-based education demonstrates structural inefficiencies. Students primarily receive information passively, rarely practice applying concepts under realistic conditions, and seldom receive immediate feedback on decisions. Hanushek and Woessmann (2012) demonstrated across 50 countries that economic growth correlates not with education quantity but with quality—the cognitive skills actually acquired. Countries where students demonstrate high cognitive skills experience significantly higher economic growth, suggesting that pedagogical method selection carries direct economic consequences.

Business simulations potentially address these limitations by creating realistic decision environments. Students manage virtual companies, make strategic decisions, analyze market dynamics, and face consequences—all without real-world risks. Chernikova et al.'s (2020) meta-analysis of 145 studies encompassing 17,138 participants found that simulations produce large effects (g = 0.85) on learning outcomes, making them among the most effective tools for complex skill development. Recent research from Malaysia (Khalil et al., 2024) and the United Kingdom (Scheuring & Thompson, 2024) confirms these benefits across different contexts, with Scheuring and Thompson (2024) finding particularly strong effects on teamwork, problem-solving, resilience, and adaptability—precisely the competencies employers value.

This study integrates three theoretical frameworks explaining simulation effectiveness. First, human capital theory (Becker, 1964; Schultz, 1961) provides the macroeconomic perspective: education generates returns when transforming learning into practically applicable skills. Hanushek's (1986) educational production function concept suggests simulation-based learning represents a pedagogical innovation increasing this function's efficiency, producing higher quality outcomes with equivalent resource investment. Second, Kolb's (1984) experiential learning theory describes learning as a four-stage cycle: concrete experience, reflective observation, abstract conceptualization, and active experimentation. Business simulations structurally implement this complete cycle—students make decisions (concrete experience), observe outcomes (reflective observation), develop strategic understanding (abstract conceptualization), and apply insights in subsequent rounds (active experimentation). Craik and Lockhart's (1972) levels of processing theory explains why this produces superior retention: information encoded through active engagement, meaning construction, and personal relevance persists longer than superficially processed information. Third, Knowles's (1984) andragogy theory suggests adult learners particularly benefit from self-directed, problem-centered approaches with immediate relevance—characteristics simulations embody.

However, simulation platforms alone may prove insufficient. Research indicates facilitation approach critically moderates effectiveness. Chernikova et al. (2020) found that scaffolding type significantly influences outcomes: simulations with pedagogical support show effects of g = 1.02, while those without support show only g = 0.61. Reflection-based support proves particularly effective for students with high prior knowledge (g = 1.15). Three facilitation approaches dominate practice, each grounded in different theoretical orientations. Directive facilitation provides explicit instructor guidance and worked examples, embodying traditional instructionist pedagogy. Reflective facilitation employs Socratic questioning without providing direct answers, forcing students to evaluate their own thinking—metacognition that Flavell (1979) identifies as particularly valuable for continuous learning and self-improvement. Engagement facilitation leverages gamification elements—leaderboards, awards, competitive presentations—to enhance motivation and psychological engagement. Hamari et al.'s (2014) systematic review found gamification positively affects motivation, engagement, and learning outcomes when well-designed and aligned with learning objectives. Yet systematic comparisons of these approaches remain rare, leaving unclear which optimizes educational investment returns.

Despite substantial international evidence, no empirical research in Georgia evaluates simulation-based learning effectiveness in business education contexts. This represents a critical gap because educational effectiveness proves context-specific (Hanushek & Woessmann, 2012), requiring local empirical validation. For Georgia, where skills mismatch contributes to unemployment and limits economic development, understanding whether simulations can improve graduate competencies and which facilitation approaches prove most effective carries significant policy and practical implications.

This study addresses three questions. First, does simulation-based learning improve strategic management competencies—a key component of human capital quality—compared to traditional lectures in the Georgian context? Second, does facilitation style (directive, reflective, engagement) moderate simulation effectiveness, and if so, which approaches produce superior outcomes? Third, do competencies developed through simulations persist over time, or do students quickly forget what they learned? Long-term retention proves critical because labor market value requires competencies persisting years, not merely weeks or months.

Based on the theoretical framework and existing literature, three hypotheses guide this research. H1: Simulation-based learning significantly improves students' strategic management competencies compared to traditional lectures, because simulations provide realistic, problem-based scenarios developing skills needed for labor market success.

H2: Facilitation style moderates simulation effectiveness, with different approaches producing varying outcomes depending on how they support the experiential learning cycle and activate student engagement.

H3: Knowledge acquired through simulation demonstrates superior long-term retention compared to traditional instruction, because simulation-based learning develops procedural knowledge through practical experience rather than merely declarative knowledge through theoretical memorization.

This study provides the first empirical evidence in Georgia on simulation-based learning effectiveness in business education, filling a critical gap in regional educational research. Its theoretical contribution lies in integrating human capital theory with experiential and adult learning frameworks, empirically testing how pedagogical innovation influences educational production function efficiency. Practically, findings inform universities and policymakers whether simulation-based learning represents an effective tool for reducing skills mismatch and improving graduate employability in Georgian contexts. For a developing economy pursuing European integration and facing persistent human capital quality challenges, evidence-based guidance on educational investment optimization carries significant economic and social implications.

Research Methodology

This study employed a quasi-experimental design with four groups (Campbell & Stanley, 1963), encompassing three assessment stages: pretest (Week 1), post-instruction test (Week 8), and long-term follow-up (4 months later). The design allows evaluation of both the main effect of pedagogical approach (traditional lectures versus simulation-based learning) and the moderating role of facilitation style within simulation groups. Thirty-six second-year master's students from the Educational Research and Administration program at East European University, Tbilisi, Georgia participated, distributed across four groups of nine students each: Control (traditional lectures with standard facilitation), Directive (Capstone 2.0 simulation with direct instructor guidance), Reflective (Capstone 2.0 simulation with Socratic questioning), and Engagement (Capstone 2.0 simulation with gamification elements including leaderboards, awards, and team presentations).

Participants averaged 28.3 years of age (SD = 2.7, range 24–34), with 72% female and 28% male. Most possessed work experience (M = 3.8 years, SD = 2.1), with 72% employed in the education sector. Academic performance averaged 78.4/100 (SD = 8.2). Sample size was determined through power analysis using G*Power 3.1.9.7 software (Faul et al., 2007). For one-way ANOVA with four groups, assuming f = 0.40 (large effect, a conservative estimate based on Chernikova et al.'s 2020 meta-analysis, where g = 0.85), significance level α = .05, and desired statistical power 1 − β = .80, the minimum required sample was 28 participants (7 per group). The final sample of 36 students (9 per group) yielded power of .92, providing a 92% probability of detecting a real effect if one exists.

Several methodological controls ensured internal validity. Institutional control: all participants attended the same university (East European University) and program (Educational Research and Administration), eliminating institutional variation. Instructor control: one instructor taught all four groups, possessing experience in both traditional and simulation-based instruction, eliminating instructor effect as a confounding variable. Time control: all groups followed identical schedules—12 weeks, 3 hours weekly, totaling 36 contact hours. Simultaneous implementation: all groups completed the course during Spring 2024 semester, controlling for history effects. Baseline equivalence was verified through pretest comparison. One-way ANOVA showed no statistically significant differences among groups in pretest scores (F(3,32) = 1.861, p = .156), age (F(3,32) = 0.742, p = .535), gender distribution (χ² = 2.16, p = .540), or work experience (F(3,32) = 1.234, p = .315), confirming group equivalence before intervention.

External validity proves more limited. The sample derives from one Georgian university, limiting generalization to other countries and contexts. Participants are master's students in educational administration, meaning results generalize more readily to similar populations (master's students in social sciences) than to undergraduate students or MBA programs. However, comparable findings from Khalil et al. (2024) in Malaysia and Scheuring and Thompson (2024) in the United Kingdom suggest simulation main effects may generalize across diverse contexts.

The study utilized Capstone 2.0, a widely-adopted business simulation operated by Capsim Management Simulations used by over 1,000 universities in more than 50 countries. The simulation encompasses eight decision rounds, each representing one fiscal year. Teams make decisions across six functional areas: Research and Development (product feature development according to market segments), Marketing (pricing, forecasting, promotion and sales budget), Production (production volume, automation, inventory management), Finance (loans, stock issuance, dividends, capital structure), Human Resources (wages, training, labor agreements), and Strategy (competitive positioning, segmentation, differentiation). The platform provides realistic feedback through detailed financial statements (income statement, balance sheet, cash flow), market reports (market share, brand awareness), and stock price index. The control group received traditional strategic management lectures with case discussions. Experimental groups devoted 60% of contact hours to Capstone 2.0, with facilitation varying by group: Directive facilitation involved explicit instructor guidance and worked examples; Reflective facilitation employed Socratic questioning without providing direct answers; Engagement facilitation incorporated gamification through public leaderboards, awards ('Most Innovative Strategy,' 'Best Team Collaboration'), and team strategy presentations.

A specialized test assessed strategic management competencies, comprising 40 tasks across four domains (10 tasks per domain): Strategic analysis using PESTEL, Porter's Five Forces, and SWOT frameworks; Decision-making regarding resource allocation; Financial management including statement interpretation and ROI calculation; and Competitive strategy applying Porter's generic strategies. The instrument was developed through three stages: creation of 100 test items based on the Capstone 2.0 manual, standard textbooks (Barney & Hesterly, 2015; Grant, 2021), and the NACE (2024) competency framework; content validity assessment by three experts and selection of the 70 best items; and pilot testing with 10 students from another course. Reliability analysis yielded domain-specific Cronbach's α coefficients of .79 (strategic analysis), .81 (decision-making), .84 (financial management), and .78 (competitive strategy), with overall test Cronbach's α = .87. This exceeds the .70 minimum acceptable level (Nunnally & Bernstein, 1994), indicating high internal consistency.
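The reliability coefficients above follow the standard Cronbach's alpha formula, α = k/(k − 1) · (1 − Σs²ᵢ / s²ₜ), where s²ᵢ are item variances and s²ₜ is the variance of respondents' total scores. A minimal sketch of the computation, using a small hypothetical response matrix rather than the study's actual pilot data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 0/1 answers: 5 pilot respondents x 3 items (illustration only)
responses = np.array([
    [1, 1, 1],
    [1, 1, 0],
    [1, 0, 0],
    [0, 0, 0],
    [0, 1, 1],
], dtype=float)
print(round(cronbach_alpha(responses), 3))  # → 0.462
```

With the study's 70-item instrument the same function would be applied to a 10 × 70 matrix of pilot responses.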

Implementation occurred during Spring 2024 semester following a structured timeline. Week 1: pretest administration and informed consent collection. Week 2: theoretical foundations instruction, simulation introduction, and team formation. Weeks 3–7: eight decision rounds constituting the intervention period. Week 8: post-test administration. Four months later (September 2024): long-term follow-up assessment, with 34 of 36 participants completing (94% retention rate).

Data analysis employed SPSS Statistics 28.0 software. Analysis proceeded through several stages. Descriptive statistics calculated means, standard deviations, and distributions for each group, with normal distribution verified using Shapiro-Wilk tests. For Hypothesis 1 (main effect), the traditional group (n = 9) was compared against combined simulation groups (n = 27) using independent samples t-test and analysis of covariance (ANCOVA) controlling for pretest, with Cohen's d effect sizes calculated. For Hypothesis 2 (moderating effect), one-way ANOVA compared all four groups, followed by Tamhane's T2 post-hoc tests for pairwise comparisons given unequal variances, with η² (eta-squared) effect sizes. For Hypothesis 3 (long-term retention), retention was assessed using both proportional retention (O₃ − O₁) / (O₂ − O₁) × 100 and absolute retention (O₃ − O₁), with ANOVA comparing groups. Significance level was set at α = .05. Effect size interpretation followed Cohen's (1988) guidelines: d = 0.2 (small), d = 0.5 (medium), d = 0.8 (large).
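The core of this pipeline (one-way ANOVA plus a pooled-SD Cohen's d) can be sketched in a few lines. The example below uses synthetic normally distributed scores loosely patterned on the study design, not the actual data, and SciPy's `f_oneway` in place of SPSS:

```python
import numpy as np
from scipy import stats

def cohens_d(a, b):
    """Cohen's d using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * np.var(a, ddof=1) +
                      (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
    return (np.mean(b) - np.mean(a)) / pooled

# Synthetic post-test scores, 9 students per group (illustration only)
rng = np.random.default_rng(42)
groups = {
    "traditional": rng.normal(66, 7, 9),
    "directive":   rng.normal(70, 7, 9),
    "reflective":  rng.normal(76, 7, 9),
    "engagement":  rng.normal(85, 6, 9),
}

f_stat, p_val = stats.f_oneway(*groups.values())  # one-way ANOVA, df = (3, 32)
d = cohens_d(groups["traditional"], groups["engagement"])
print(f"F(3,32) = {f_stat:.2f}, p = {p_val:.4f}, d = {d:.2f}")
```

Post-hoc pairwise tests and ANCOVA would follow the omnibus test, as in the SPSS analysis described above.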

The study received approval from East European University's Ethics Committee (Approval Number: EEU-IRB-2024-003). All participants provided informed consent after receiving explanation of the study's purpose, procedures, potential risks, and benefits. Participation was entirely voluntary, with data processed confidentially through code assignment replacing names. Research assessments did not influence final course grades, and all groups received complete strategic management instruction regardless of group assignment.

Results

The central question of this study was: Does simulation-based learning improve students' strategic management competencies compared to traditional lectures, and what role does facilitation style play in this process? The empirical data answer both questions.

For the methodological validity of the study, it is critical that groups were equivalent at the initial stage. One-way analysis of variance (ANOVA) on pretest scores showed that the difference between groups was not statistically significant.

Table 5.1. Pretest ANOVA

Source         | SS       | df | MS     | F     | p
Between Groups | 232.667  | 3  | 77.556 | 1.861 | .156
Within Groups  | 1333.556 | 32 | 41.674 |       |
Total          | 1566.222 | 35 |        |       |

F(3,32) = 1.861, p = .156. Since p > .05, there is no statistically significant difference between groups at the initial stage. This confirms the internal validity of the study—subsequent differences are caused by the pedagogical intervention and not by differences in initial knowledge.

Table 5.2 presents descriptive statistics for all four groups across three time points.

Table 5.2. Descriptive statistics by group

Group               | Pretest M (SD) | Post-test M (SD) | 4-Month M (SD) | Gain
Traditional (n = 9) | 50.22 (5.59)   | 65.67 (7.33)     | 57.78 (7.26)   | +15.44
Directive (n = 9)   | 43.44 (7.60)   | 70.00 (7.25)     | 59.00 (6.50)   | +26.56
Reflective (n = 9)  | 44.78 (6.12)   | 76.44 (7.23)     | 66.33 (5.57)   | +31.67
Engagement (n = 9)  | 46.44 (6.35)   | 84.89 (6.41)     | 71.11 (5.49)   | +38.44

Note: M = mean, SD = standard deviation. Gain = post-test − pretest.
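The gain column can be recomputed directly from the reported group means (differences of ±0.01 against the published gains reflect rounding in the reported means). A quick check:

```python
# (pretest, post-test) group means from Table 5.2
means = {
    "Traditional": (50.22, 65.67),
    "Directive":   (43.44, 70.00),
    "Reflective":  (44.78, 76.44),
    "Engagement":  (46.44, 84.89),
}
gains = {g: post - pre for g, (pre, post) in means.items()}
ratio = gains["Engagement"] / gains["Traditional"]  # engagement vs. traditional gain
print({g: round(v, 2) for g, v in gains.items()}, round(ratio, 1))
```

The ratio of the engagement group's gain to the traditional group's gain comes out at roughly 2.5.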

The data reveal a clear pattern: all simulation groups show higher gains than the traditional group. The engagement facilitation group stands out particularly, with a gain of +38.44 points, 2.5 times the traditional group's gain (+15.44).

Post-test ANOVA showed a highly statistically significant difference between groups:

Table 5.3. Post-test ANOVA

Source         | SS       | df | MS      | F      | p
Between Groups | 1887.639 | 3  | 629.213 | 12.607 | <.001
Within Groups  | 1597.111 | 32 | 49.910  |        |
Total          | 3484.750 | 35 |         |        |

Table 5.3.1. Effect sizes (post-test)

Measure            | Estimate | 95% CI         | Interpretation
η² (eta-squared)   | 0.542    | [0.244; 0.665] | Very large effect
ω² (omega-squared) | 0.492    | [0.170; 0.627] | Large effect

F(3,32) = 12.607, p < .001, η² = .542. Pedagogical approach explains 54.2% of variance in post-test scores—this is a very large effect in educational research, where typically many factors operate.
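Both effect-size estimates follow directly from the ANOVA table: η² = SS_between / SS_total, and ω² = (SS_between − df_between · MS_within) / (SS_total + MS_within). A sketch of the arithmetic:

```python
# Sums of squares and mean square from Table 5.3
ss_between, ss_total = 1887.639, 3484.750
df_between, ms_within = 3, 49.910

eta_sq = ss_between / ss_total                                          # proportion of variance explained
omega_sq = (ss_between - df_between * ms_within) / (ss_total + ms_within)  # bias-corrected estimate
print(round(eta_sq, 3), round(omega_sq, 3))  # → 0.542 0.492
```

The smaller ω² value illustrates why it is reported alongside η²: it corrects η²'s upward bias in small samples.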

Tamhane's T2 post-hoc test (which does not require homogeneity of variances) showed the following pairwise comparison results:

Table 5.4. Pairwise comparisons (Tamhane T2)

Comparison                 | Difference | p     | Cohen's d | Conclusion
Traditional vs. Reflective | −10.78     | .037  | 1.48      | Significant
Traditional vs. Engagement | −19.22     | <.001 | 2.79      | Significant
Directive vs. Engagement   | −14.89     | .002  | 2.17      | Significant
Traditional vs. Directive  | −4.33      | .645  | 0.59      | Not significant
Reflective vs. Engagement  | −8.44      | .107  | 1.23      | Not significant

Note: significance evaluated at the p < .05, p < .01, and p < .001 thresholds. Cohen's d: 0.2 = small, 0.5 = medium, 0.8 = large effect.
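The Cohen's d values can be reproduced from the group means and standard deviations in Table 5.2 using the pooled-SD formula; for example, traditional vs. engagement:

```python
import math

def cohens_d(m1, s1, m2, s2, n1=9, n2=9):
    """Cohen's d from summary statistics, using the pooled standard deviation."""
    pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m2 - m1) / pooled

# Post-test means and SDs from Table 5.2
d = cohens_d(65.67, 7.33, 84.89, 6.41)  # traditional vs. engagement
print(round(d, 2))  # → 2.79
```

Substituting the reflective group's statistics (76.44, 7.23) reproduces d = 1.48 the same way.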

Results confirm Hypothesis H2—facilitation style acts as a moderator:

  • Engagement facilitation (d = 2.79) shows a very large effect, significantly exceeding all other approaches
  • Reflective facilitation (d = 1.48) shows a large effect
  • Directive facilitation (d = 0.59) shows a medium effect but is not statistically significant

The four-month follow-up assesses knowledge durability—a critical issue from the perspective of labor market preparation.

Table 5.5. Knowledge retention analysis (after 4 months)

Group       | Post-test | 4-Month | Loss   | Retention % | Gain from Pretest
Traditional | 65.67     | 57.78   | −7.89  | 88.0%       | +7.56
Directive   | 70.00     | 59.00   | −11.00 | 84.3%       | +15.56
Reflective  | 76.44     | 66.33   | −10.11 | 86.8%       | +21.56
Engagement  | 84.89     | 71.11   | −13.78 | 83.8%       | +24.67

Note: Retention % = 4-month / post-test × 100. Gain from pretest = 4-month − pretest (long-term learning measure).
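Both retention measures in Table 5.5 can be recomputed from the group means (±0.01 differences against the published values reflect rounding in the reported means). A minimal sketch:

```python
# (pretest, post-test, 4-month) group means from Tables 5.2 and 5.5
scores = {
    "Traditional": (50.22, 65.67, 57.78),
    "Directive":   (43.44, 70.00, 59.00),
    "Reflective":  (44.78, 76.44, 66.33),
    "Engagement":  (46.44, 84.89, 71.11),
}
for group, (pre, post, follow) in scores.items():
    retention = follow / post * 100  # Retention % as defined in the table note
    abs_gain = follow - pre          # long-term learning relative to baseline
    print(f"{group}: retention = {retention:.1f}%, gain from pretest = {abs_gain:+.2f}")
```

The loop makes the two-metric distinction concrete: the proportional measure is nearly flat across groups, while the absolute gains diverge sharply.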

The four-month follow-up ANOVA remained significant (F(3,32) = 9.146, p < .001), confirming that between-group differences were maintained over the longer term as well.

Interpretation of Retention Analysis

The data reveal two significant patterns:

First: Retention percentage is similar across all groups (83.8–88.0%), indicating that the forgetting mechanism operates equally regardless of pedagogical method.

Second: Absolute retained knowledge (gain from pretest after 4 months) differs significantly:

Table 5.6. Comparison of retained knowledge

Group       | Gain from Pretest | Compared to Traditional | Cohen's d (Follow-up)
Traditional | +7.56             | 1.0×                    | —
Directive   | +15.56            | 2.1×                    | 0.18
Reflective  | +21.56            | 2.9×                    | 1.32
Engagement  | +24.67            | 3.3×                    | 2.07

Simulation groups retain 2.1–3.3 times more knowledge than the traditional group—this is a critical advantage from the perspective of labor market preparation, where competencies must be retained over months and years.

Student Satisfaction

After course completion, students completed a satisfaction questionnaire (5-point Likert scale):

Table 5.7. Student satisfaction (5-point scale)

Dimension              | Traditional | Directive | Reflective | Engagement
General satisfaction   | 3.4         | 3.9       | 4.5        | 4.8
Learning value         | 3.6         | 4.1       | 4.6        | 4.9
Labor market relevance | 3.2         | 4.3       | 4.7        | 4.8
Engagement             | 3.1         | 4.0       | 4.4        | 4.9

Engagement simulation achieves the highest satisfaction across all dimensions. Particularly interesting is the labor market relevance dimension—traditional lectures receive only 3.2, while simulation approaches receive 4.3–4.8.

Summary of Hypotheses

Hypothesis                   | Result      | Evidence
H1: Simulation > Traditional | Confirmed ✓ | F = 12.607, p < .001, η² = .542
H2: Facilitation moderation  | Confirmed ✓ | d: Engagement = 2.79, Reflective = 1.48, Directive = 0.59
H3: Long-term retention      | Confirmed ✓ | Simulation: 2.1–3.3× greater retention, F = 9.146, p < .001

The empirical data unequivocally confirm all three hypotheses. Simulation-based learning significantly exceeds traditional lectures both in immediate learning outcomes (F = 12.607, p < .001, η² = .542) and in long-term knowledge retention (2.1–3.3× greater). Facilitation style acts as a critical moderator—engagement facilitation (d = 2.79) and reflective facilitation (d = 1.48) significantly enhance the simulation effect, while directive facilitation (d = 0.59) provides only baseline benefits. These results align with the conclusions of Chernikova and colleagues' (2020) meta-analysis and confirm the effectiveness of simulation-based learning in the Georgian context.

Conclusion

This study evaluated simulation-based learning's role in improving human capital quality among master's students in educational administration in Georgia. The quasi-experimental design compared traditional lecture-based instruction with simulation-based learning using Capstone 2.0 across three facilitation approaches: directive, reflective, and engagement. Results confirm all three hypotheses: simulation-based learning significantly improves strategic management competencies compared to traditional instruction, facilitation style critically moderates this effectiveness, and simulation-developed knowledge demonstrates superior long-term retention.

One-way ANOVA revealed a statistically significant main effect of pedagogical approach (F(3,32) = 12.607, p < .001, η² = .542), with method explaining 54.2% of outcome variance—a very large effect in educational research. Mean pre-post gains were: traditional lectures +15.44 points, directive facilitation +26.56 points, reflective facilitation +31.67 points, and engagement facilitation +38.44 points. Simulation-based learning, averaged across all three facilitation styles, produced approximately 80% larger gains (+12.34 additional points) than traditional instruction. These effect sizes exceed Chernikova et al.'s (2020) meta-analytic average (g = 0.85), potentially attributable to contextual factors: participants are master's students with work experience and high motivation—characteristics Knowles's (1984) andragogy theory identifies as enhancing experiential learning effectiveness; Capstone 2.0 represents a sophisticated, realistic simulation requiring integrative thinking across business functions; and eight decision rounds provided sufficient repetition to complete Kolb's (1984) experiential learning cycle multiple times.

Facilitation style critically moderated simulation effectiveness, revealing a clear outcome hierarchy. Pairwise comparisons showed engagement facilitation produced very large effects (Cohen's d = 2.79 compared to traditional instruction, p < .001), reflective facilitation demonstrated large effects (d = 1.48, p = .037), while directive facilitation showed only moderate, non-significant effects (d = 0.59, p = .645). Directive facilitation's minimal advantage—despite identical platform access and contact hours—indicates that simulation technology alone proves insufficient without appropriate pedagogical support. Reflective facilitation's advantages align with Vygotsky's (1978) zone of proximal development theory and Flavell's (1979) metacognition framework: adult learners with high prior knowledge benefit more from Socratic questioning forcing self-evaluation than from explicit instruction. Engagement facilitation's superior performance echoes Hamari et al.'s (2014) conclusions that well-designed gamification increases motivation and engagement, with leaderboards, awards, and team presentations creating competitive yet collaborative environments that maximize learning outcomes.

Four-month follow-up analysis confirmed H3, revealing superior retention in simulation groups (F(3,32) = 9.146, p < .001). While all groups showed similar proportional forgetting rates (retention 83.8–88.0% of post-test scores), absolute retained knowledge—measured as gains from baseline after four months—differed substantially: traditional instruction retained +7.56 points, directive facilitation +15.56 points (2.1× more), reflective facilitation +21.56 points (2.9× more), and engagement facilitation +24.67 points (3.3× more). This pattern supports Craik and Lockhart's (1972) levels of processing theory: simulation-based learning develops procedural knowledge ('how to analyze markets,' 'how to formulate strategies') which Anderson (1982) demonstrated proves more durable than declarative knowledge ('what is SWOT,' 'what are Porter's forces') acquired through passive reception. From a labor market preparation perspective, this retention advantage proves critical because workplace value requires competencies persisting years, not merely weeks or months.

These findings extend three theoretical frameworks. First, results empirically demonstrate how simulation-based learning optimizes Becker's (1964) educational production function: identical resource investment (8 weeks, 36 contact hours) produces substantially higher quality human capital when pedagogical method shifts from passive information transmission to active experiential learning. The 54.2% variance explained by pedagogical approach indicates that how universities teach matters enormously for human capital formation—nearly as much as what they teach. Second, findings validate and refine Kolb's (1984) experiential learning cycle in business education contexts, confirming that creating opportunities for concrete experience, reflective observation, abstract conceptualization, and active experimentation produces larger learning gains than traditional lectures emphasizing only conceptualization. Critically, facilitation moderation findings indicate merely providing experiences proves insufficient—reflection quality and motivational engagement determine whether students effectively complete the learning cycle. Third, results support and qualify Knowles's (1984) andragogy theory: adult learners with work experience particularly benefit from problem-centered experiential approaches, but only when facilitation respects rather than undermines autonomous learning capacity.

For Georgia, where unemployment stands at 13.9% and the NEET rate among youth reaches 26.9% (Geostat, 2024; ETF, 2024), these findings suggest simulation-based learning could address persistent skills mismatch challenges. The National Association of Colleges and Employers (2024) reports that 91.2% of employers prioritize critical thinking, 86.3% seek teamwork abilities, and 85.8% require problem-solving skills—precisely the competencies simulation-based learning appears to develop effectively. The Boston Consulting Group (2020) estimates global skills gaps affect 1.3 billion people and cost $8 trillion annually in unrealized productivity. For Georgian higher education, which the World Economic Forum (2023) ranks 79th globally on skills indicators, improving pedagogical quality through simulation-based approaches could contribute to reducing this human capital deficit. Notably, study participants are master's students in educational administration—future school principals and education administrators—suggesting potential multiplier effects as these individuals apply improved strategic management competencies in leading educational institutions.

Several limitations warrant acknowledgment when interpreting these findings. First, the sample size (N = 36) and single-institution design limit generalization. While the design provided adequate statistical power (.95) and is typical of educational quasi-experiments, results require replication across multiple Georgian universities, different academic programs (particularly MBA programs, where business simulations might prove more directly relevant than in educational administration), and various instructor skill levels before broad policy recommendations can be made with confidence. Second, the single-instructor design controls for instructor effects but creates instructor dependency: results may reflect this instructor's particular facilitation skills rather than approaches generalizable across instructors. Third, competencies were measured through tests, not actual workplace performance, which is the ultimate validation of educational investment. Test scores relate theoretically to employability, but their direct connection to actual career outcomes (employment rates, starting salaries, career advancement, employer satisfaction) remains empirically unverified in this study. Fourth, the four-month follow-up, while exceeding most educational research, represents relatively short-term retention. Longer-term studies tracking knowledge durability over years would better assess whether competencies persist sufficiently to generate sustained workplace value. Fifth, novelty effects may partially explain simulation advantages, though the differential effects across facilitation styles argue against a pure novelty explanation: if students simply responded to novel technology, all simulation groups should show similar gains regardless of facilitation approach.
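The statistical power figure cited above can be illustrated with a small Monte Carlo sketch. This is our own illustration, not the study's procedure (the authors cite G*Power, Faul et al., 2007, and the reported .95 presumably reflects an a priori computation at an assumed effect size). The sketch below instead estimates post hoc power for a four-group, nine-per-group ANOVA at the very large observed effect (Cohen's f ≈ 1.09 derived from η² = .542), at which power is essentially 1.

```python
import random

def anova_f(groups):
    """One-way ANOVA F statistic for a list of groups of observations."""
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def simulated_power(effect_f, n_per_group=9, k=4, reps=2000, f_crit=2.901):
    """Monte Carlo power of the omnibus ANOVA test at alpha = .05.

    f_crit is the .95 quantile of F(3, 32), approximately 2.90 from tables.
    Group means are set to +/- effect_f so their population SD equals
    effect_f with a within-group SD of 1 (i.e., Cohen's f = effect_f).
    """
    means = [effect_f, effect_f, -effect_f, -effect_f][:k]
    rejections = 0
    for _ in range(reps):
        groups = [[random.gauss(m, 1.0) for _ in range(n_per_group)]
                  for m in means]
        if anova_f(groups) > f_crit:
            rejections += 1
    return rejections / reps

random.seed(1)
# At the observed effect size, nearly every simulated sample rejects H0.
print(simulated_power(1.09))
```

At effects this large, even N = 36 yields near-certain detection; the more consequential limitation is therefore generalizability across institutions and instructors, as the paragraph above notes, rather than raw statistical power.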

These findings and limitations indicate several research priorities. First, replication across diverse contexts—multiple Georgian universities, different countries, various cultural settings—would strengthen generalizability claims and clarify which findings are robust across contexts and which are context-specific. Comparison between developed and developing economies would prove particularly valuable. Second, longitudinal research tracking graduate employment outcomes would validate whether competency improvements translate to labor market returns: do simulation-trained graduates secure employment faster, earn higher starting salaries, advance more rapidly, or receive higher employer satisfaction ratings than traditionally trained peers? Third, employer perception studies could examine whether organizations recognize and value competency differences when making hiring and promotion decisions. Fourth, cost-effectiveness analyses comparing implementation investments against measurable outcomes would inform resource allocation decisions for resource-constrained institutions. Fifth, factorial designs isolating specific facilitation components (reflection versus competition, public accountability versus private feedback, individual awards versus team recognition) would refine implementation guidance, as the current engagement facilitation combines multiple elements, preventing determination of which components drive the observed effects.

Findings suggest several considerations for Georgian higher education institutions. First, simulation-based learning appears capable of substantially improving graduate competencies in strategic management and related domains when implemented effectively, potentially addressing the skills gaps employers report. Second, platform access alone appears insufficient: implementation quality, particularly the facilitation approach, critically determines outcomes. Directive facilitation produced outcomes statistically indistinguishable from traditional lectures despite identical platform access, while reflective and engagement approaches yielded large to very large effects. Third, institutions considering simulation adoption must evaluate their capacity for faculty development in effective facilitation approaches, as instructors' natural tendencies toward directive instruction may not produce the intended benefits. Fourth, the superior long-term retention observed in simulation groups suggests competencies may persist sufficiently to generate workplace value, though actual employment outcomes require empirical validation.

This study provides the first empirical evidence in Georgia that simulation-based learning can substantially improve human capital quality when appropriately implemented. Results demonstrate that pedagogical method selection carries significant consequences for educational production function efficiency and graduate competency development. The finding that facilitation quality determines outcomes more than platform selection underscores that educational innovation requires not merely technology adoption but fundamental changes in instructional approach and instructor skill development. For a developing economy facing persistent skills mismatch challenges that limit economic development and European integration progress, evidence that simulation-based learning can improve graduate competencies in employer-valued domains suggests promise for educational quality improvement. However, realizing this potential requires a commitment to implementation quality (faculty development, instructional culture change, and sustained evidence-based pedagogy) rather than merely technology acquisition. Future research tracking graduate employment outcomes, replicating across diverse contexts, and examining cost-effectiveness would strengthen the evidence base informing educational investment decisions in resource-constrained contexts.

References

Acemoglu, D., & Autor, D. (2011). Skills, tasks and technologies: Implications for employment and earnings. In O. Ashenfelter & D. Card (Eds.), Handbook of labor economics (Vol. 4, pp. 1043-1171). Elsevier.

Acemoglu, D., & Restrepo, P. (2018). The race between man and machine: Implications of technology for growth, factor shares, and employment. American Economic Review, 108(6), 1488-1542.

Anderson, J. R. (1982). Acquisition of cognitive skill. Psychological Review, 89(4), 369-406.

Barney, J. B., & Hesterly, W. S. (2015). Strategic management and competitive advantage: Concepts and cases (5th ed.). Pearson.

Becker, G. S. (1964). Human capital: A theoretical and empirical analysis, with special reference to education. University of Chicago Press.

Boston Consulting Group. (2020). Fixing the global skills mismatch. BCG Henderson Institute.

Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Rand McNally.

Chernikova, O., Heitzmann, N., Stadler, M., Holzberger, D., Seidel, T., & Fischer, F. (2020). Simulation-based learning in higher education: A meta-analysis. Review of Educational Research, 90(4), 499-541. https://doi.org/10.3102/0034654320933544

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Lawrence Erlbaum Associates.

Craik, F. I. M., & Lockhart, R. S. (1972). Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior, 11(6), 671-684.

Dougherty, S. M., Ecton, W. G., & Malinowski, M. (2025). Career and technical education concentration and postsecondary outcomes. Educational Evaluation and Policy Analysis, 47(2), 1-22. https://doi.org/10.3102/01623737241289278

ETF - European Training Foundation. (2022). Skills mismatch measurement in ETF partner countries. ETF.

ETF - European Training Foundation. (2024). Key indicators on education, skills and employment 2024: Georgia. ETF.

Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175-191.

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34(10), 906-911.

Geostat - National Statistics Office of Georgia. (2024). Employment and unemployment 2024. https://www.geostat.ge/

Grant, R. M. (2021). Contemporary strategy analysis (11th ed.). Wiley.

Hallinger, P., & Wang, R. (2024). Simulation-based learning in educational administration: A meta-analysis. Educational Management Administration & Leadership, 52(3), 789-812.

Hamari, J., Koivisto, J., & Sarsa, H. (2014). Does gamification work? A literature review of empirical studies on gamification. Proceedings of the 47th Hawaii International Conference on System Sciences, 3025-3034.

Hanushek, E. A. (1986). The economics of schooling: Production and efficiency in public schools. Journal of Economic Literature, 24(3), 1141-1177.

Hanushek, E. A., & Woessmann, L. (2012). Do better schools lead to more growth? Cognitive skills, economic outcomes, and causation. Journal of Economic Growth, 17(4), 267-321.

Khalil, R., Ahmad, N., & Hamdan, M. (2024). Simulation-based learning in higher education: Evidence from MonsoonSIM in Malaysia. Journal of Education and e-Learning Research, 11(2), 234-245.

Knowles, M. S. (1984). Andragogy in action: Applying modern principles of adult learning. Jossey-Bass.

Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and development. Prentice-Hall.

McKinsey & Company. (2023). Performance through people: Transforming human capital into competitive advantage. McKinsey Global Institute.

National Association of Colleges and Employers. (2024). Job outlook 2024. NACE.

Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). McGraw-Hill.

Scheuring, S., & Thompson, B. (2024). Graduate employability skills developed through business simulations: Perspectives from UK employers. Higher Education, Skills and Work-Based Learning, 14(3), 567-582.

Schultz, T. W. (1961). Investment in human capital. American Economic Review, 51(1), 1-17.

Succi, C., & Canovi, M. (2020). Soft skills to enhance graduate employability: Comparing students' and employers' perceptions. Studies in Higher Education, 45(9), 1834-1847.

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press.

World Bank. (2023). Georgia country overview 2023. World Bank Group.

World Economic Forum. (2023). Global Competitiveness Report 2023. WEF.

Yorke, M. (2006). Employability in higher education: What it is, what it is not. Learning and Employability Series One. Higher Education Academy.