DOI: https://doi.org/10.64010/IVCA2155
Abstract
There is a clear disconnect between the perceptions of workplace readiness among industry leaders and the graduates of business schools. Employers are placing greater emphasis on soft or non-technical skills while concurrently expressing concerns that new hires lack these abilities. Urged by industry, business schools have been experimenting with andragogic approaches to improve the soft skills of their students; however, the results have been largely mixed. While it can be argued that business education has achieved greater awareness of the importance of soft skills relative to student and organizational success, the fact remains that there is little empirical evidence that curricular modifications have resulted in a meaningful improvement in graduates’ skills. This lack of evidence can be largely attributed to the unique challenges associated with defining and assessing soft skills, which is further complicated by the lack of valid and reliable grading rubrics. As one can imagine, assessing soft skills in curricula delivered in an entirely online format adds another layer of complexity. Not only was the online Master of Healthcare Administration (MHA) at Park University redesigned in a manner that horizontally wove a core set of soft skills throughout the curriculum, it also utilizes a structured team approach as the primary means of developing and assessing these skills in our students. This case study will highlight the multiyear journey that led to the development of a set of valid and reliable rubrics and improved soft skills in our graduates.
Introduction
According to Holmberg-Wright and Hribar (2016), “if organizations are to survive in today’s global, information based economy, managers and employees together must master the skills necessary to connect and influence others, maintain relations, and manage and control themselves” (p. 13). This supposition has been affirmed by the Healthcare Leadership Alliance (HLA), which purports that healthcare professionals need to be effective leaders who engage in productive communication and relationship management, while maintaining high levels of professionalism (Gartman, 2006). There is an acknowledgement across sectors that employers view hard and soft skills as equally important (Andrews & Higson, 2008; Bhagra & Sharma, 2018; Jones, Baldi, Phillips, & Waikar, 2016). As a practical matter, skills can be divided into two broad categories: technical or hard skills and non-technical or soft skills (Majid, Eapen, Aung, & Oo, 2019). Despite the growing emphasis on soft skills, there is a misalignment between the perceptions of business school alumni and employers with regard to career readiness (Singh & Jaykumar, 2019). Graduates are under the impression that they are well prepared to enter and succeed in the job market, whereas multiple surveys of industry leaders express an opposing viewpoint (Bedwell, Fiore, & Salas, 2014; Andrews & Higson, 2008; Tugend, 2013; Anthony & Garner, 2016; Holmberg-Wright & Hribar, 2016; Pereira, 2013; Renko, Tarabishy, Casrud, & Bannback, 2015; Robinson, 2014; Sousa, 2018). The position of employers can best be summed up in the results from a 2018 Bloomberg Next and Workday survey of senior-level professionals, whereby the majority of respondents believe that “new hires are not well-prepared to perform at a high level in a professional environment, primarily because of insufficient soft skills” (Bloomberg Next, 2018, p. 2).
While hard skills remain an important component of professional success, there is a body of literature which stresses the need for business schools to update their curricula in a manner which emphasizes the importance of teamwork and focuses on the soft skills needed to effectively and efficiently execute team activities (Avolio, Benzaquen, & Pretell, 2019; Ritter, Small, Mortimer, & Doll, 2017). There has been a long-held belief by employers that new hires consistently struggle to work well within a team structure (Bhagra & Sharma, 2018; Aquardo, Sanchez-Manzanares, Rico, & Salas, 2014; MacDermott & Ortiz, 2017; Martinez, 2014; Van Schaik, O’Brien, Almeida, & Adler, 2014). In a study conducted by Friedman and Frogner (2010), healthcare leaders identified managers as not at all competent when it comes to engaging employees (71.3%), holding individuals accountable (72.7%), and aligning the behaviors of a team (61.9%).
For those educators looking to strengthen their curricular focus on soft skills, doing so comes with a number of unique challenges. First, it is difficult to define and assess these skills, which is likely because most qualities attributed to soft skills are ineffable and difficult to measure (Cobb, Meixelsperger, & Seitz, 2015). Second, there is an instructional trend away from physical campuses toward online learning platforms, which increasingly rely on asynchronous approaches to interpersonal interaction (Fisher & Newton, 2014). Third, instructional modalities like the hybrid format only intermittently permit physical contact between course participants (Fisher & Newton, 2014).
In an effort to develop the soft skills of their students, business programs have incorporated individual courses (Levant, Coulmont, & Sandu, 2016) or a short series of courses that focus on the development of these skills (Ritter, Small, Mortimer, & Doll, 2017); however, these strategies have yet to change the perceptions of industry leaders with regard to graduate readiness for the workplace. Majid, Eapen, Aung, and Oo (2019) and Anthony and Garner (2016) recommend taking an integrated approach to developing soft skills by making certain these abilities are embedded throughout the curriculum and not merely in professional communication courses. In other words, they suggest that academic programs might be better served weaving soft skills throughout their curricula. Based on existing literature, there is a need to examine different andragogic approaches to soft skill development in online curricula with particular attention given to their effectiveness in changing the behaviors of graduate students. The purpose of this case study is to quantitatively determine if horizontally weaving soft skills throughout an online MHA curriculum can improve learning outcomes.
Case Study: Empirical Evidence of Soft Skill Improvement in Park University’s Online Master of Healthcare Administration Program
Situation
Following the successful completion of a comprehensive MHA program review, the faculty recognized an opportunity to develop and strengthen the soft skills of our graduate students. While searching for best practices in soft skill development, we discovered a gap in the literature. Not unlike other graduate programs, the MHA program used a silver bullet approach to soft skill development. This was accomplished mainly through content covered in select courses, like HA517: Legal and Ethical Issues in Healthcare Administration and HA605: Healthcare Organizational Behavior and Leadership. As a result, we did little more than provide our students with intermittent exposure to certain soft skills. The original curriculum emphasized knowledge conveyance. To be candid, it was easier for us to assess one’s ability to regurgitate knowledge than to demonstrate proficiency in soft skills. We made a concerted effort to locate reliable and valid rubrics specifically designed to assess soft skill proficiency; however, this too represented a gap in the literature.
The program review revealed opportunities for improvement, most notably in the curriculum. The faculty had a choice to make: maintain the status quo, make relatively minor tweaks, or use the findings as a catalyst to completely reinvent the product. We chose the latter and set out to create a truly innovative program of study that prepares students for professional success by ensuring they possess technical and non-technical skills by the time they graduate. This decision began a more than two-year journey to redevelop the curriculum.
It was relatively straightforward to augment the existing curriculum with additional hard skills. After all, we had little difficulty operationally defining hard skills and coming up with credible methods of assessing our students’ level of proficiency. What was infinitely more complex was purposefully and meaningfully incorporating “soft skills” into our andragogy. Consistent with the prevailing literature, we experienced three key challenges. First, we could not find a uniform set of definitions for the soft skills considered essential for the success of healthcare administrators. Second, we were not able to locate valid and reliable instruments designed to assess student proficiency in soft skills. Third, we experienced an added level of complexity due to the MHA program being offered in an entirely online format. To the faculty, these obstacles represented a unique opportunity to innovate and create a paradigm shift in graduate, online instruction.
Identification of Soft Skills
As part of the program review, faculty gathered and analyzed competency data from a variety of sources, including a review of the literature; competency frameworks developed by professional organizations and institutions of higher learning; programmatic accreditors; input from practitioner/scholars; and privately held companies that specialize in providing academic support services. The faculty felt strongly that we should redesign the curriculum using a competency-based framework as a starting point. According to Mangelsdorff (2014), graduate MHA programs looking to develop a competency-based framework should begin with an established competency model with defined behaviors and outcomes. We discovered eight competency models used to characterize successful healthcare administrators. As was to be expected, there was a great deal of overlap in the identified hard and soft skill competencies. The faculty considered all eight frameworks; however, five of the models were eliminated from further consideration, in large part, due to their limited use in graduate healthcare administration programs. Below is a brief description of the three competency models that faculty felt needed to be subjected to a higher level of scrutiny.
- Healthcare Leadership Alliance (HLA) developed a competency directory through an inter-organizational collaboration among six healthcare management professional organizations: American College of Healthcare Executives (ACHE), American College of Physician Executives (ACPE), American Organization of Nurse Executives (AONE), Healthcare Financial Management Association (HFMA), Healthcare Information and Management Systems Society (HIMSS), and Medical Group Management Association (MGMA) (Stefl, 2008).
- American College of Healthcare Executives (2020), which is the premier professional organization for healthcare administrators and managers, has developed a Competency Assessment Tool comprised of 230 competencies used by its membership to assess their knowledge, skills, and abilities (KSA). These competencies are a subset of the competencies contained within the Healthcare Leadership Alliance’s (HLA) model.
- The Four University model was developed as a collaborative endeavor among the Medical University of South Carolina (MUSC), Trinity University, University of Alabama at Birmingham (UAB), and Virginia Commonwealth University (VCU). This competency framework was largely based on the HLA competency model and the ACHE Competencies Assessment Tool (Clement, Hall, O’Connor, Qu, Stefl, & White, 2010).
Since the Four University and ACHE models relied on competencies contained within the HLA Competency Directory, the faculty were inclined to select the HLA framework. Before making a final decision, we wanted to confirm competency alignment with the current needs and expectations of industry leaders and managers; the competencies assessed by privately held companies (e.g., Peregrine Academic Services, LLC); and programmatic accreditors (e.g., ACBSP, AACSB, and IACBE). It was discovered that the competencies expressed by industry professionals and assessed by accreditors and private companies were closely aligned with the HLA model. This resulted in the faculty decision to use the HLA competencies framework to underpin the curricular redevelopment process. We adopted five competency domains included in the HLA Competency Directory: Communication and Relationship Management, Leadership, Professionalism, Knowledge of the Health Care Environment, and Business Skills and Knowledge. For purposes of this study, we will focus on the first three domains, since they aligned with the soft skills eventually chosen by the faculty and reflected in Table 1. The HLA Competency Directory provides a brief description of its five domains and three to eight clusters that fall under each; however, it fell short of providing operational definitions for the competencies that align under the clusters. Table 1 contains a list of the HLA domains, clusters, and the “soft skills” MHA faculty chose to emphasize within the curriculum.

The faculty felt strongly that it would be prudent for the program to restrict the number of soft skills to be assessed within the program. It was believed that the program stood a better chance of strengthening a few core soft skills as opposed to tackling too many and running the risk of only superficially touching on each. To aid us in narrowing down the competencies, we relied mainly on literature featuring the perceived soft skill deficits of new hires as expressed by industry professionals. Once we had a working list of soft skills, we developed operational definitions for each competency, which are provided in Table 2.
With soft skills identified and defined, it was time to determine how best to incorporate these key competencies into the online MHA curriculum. Do we continue with our silver bullet approach or weave these skills throughout the curriculum as horizontal threads?

Horizontal Threads vs. Silver Bullet Approach
There are graduate programs across the country that make a concerted effort to develop the soft skills of their students; however, the results are largely mixed. This conclusion is based, in large part, on the continuing perception that new hires lack essential non-technical skills. Some programs have chosen to include a dedicated course in communications, ethics, or leadership, but have not given much thought to reinforcing these learned concepts throughout the remainder of their curricula. Others have students working in teams to tackle an assigned project, but may not take the opportunity to provide formative feedback to the team participants throughout the semester/term. They may even require students to deliver a presentation in select courses, but do not do so consistently in every course. Additionally, faculty are largely left to their own devices when it comes to assessing student knowledge, skills, and abilities (KSA), which creates inconsistency in the assessment of learning outcomes. We considered these types of activities to reflect more of a silver bullet approach to soft skill development and assessment. While such activities can expose students to these essential competencies in broad brushstrokes, they lack the depth and consistent application necessary to truly change student behaviors (Majid, Eapen, Aung, & Oo, 2019; Anthony & Garner, 2016).
The MHA faculty made a conscious decision to implement a uniform course structure and horizontally weave soft skills throughout the curriculum. Figure 1 provides a graphic representation of the MHA degree with the soft skills traversing the full length of the curriculum. These horizontal threads act as cross braces within the curriculum, strengthening the overall structure and presumably maximizing the likelihood that students will improve their soft skills by graduation.
Figure 1. Graphic representation of MHA curriculum

These skills were purposely incorporated and assessed in designated graded activities within the curriculum. Figure 2 provides details regarding the design of the MHA courses. More specifically, the figure contains the graded activities, when they are assigned over the 8-week term, and their point value. Beneath the figure are notes that provide more detailed information regarding each graded activity, such as their emphasis on industry-relevant case scenarios; focus on the development of hard and soft skills; and the production of business-related deliverables.

Figure 2. MHA course structure
1 Students develop a business-related product based on industry relevant cases. This becomes the student’s initial response and an attachment to weekly discussion threads. There is a minimum of two peer responses spread over two different days between Wednesday and Sunday. These responses call for students to provide constructive criticism to their peers’ initial postings with supporting rationale.
2 Students conduct case analyses using an established format (background, key issues, analysis, identification or discussion of alternatives, and recommendation with supporting rationale). When quantitative analyses are required to complete scenario-based problems, such as the case when addressing subject matter related to operations management, finance, economics, and statistics, students are required to perform their computations using Microsoft® Office Excel®, and then write up their findings and interpretations in a prescribed business deliverable.
3 Faculty assign teams comprised of three to four students with one student being appointed to serve as Project Manager. Students use the collaborative meeting platform Zoom to conduct and record weekly team meetings. Each meeting is set up, coordinated, and recorded by the Project Manager; lasts a minimum of 45 minutes; and requires each team member to be on camera and on a live microphone or headset throughout each session. Instructors go back through the previously recorded sessions and provide each student constructive feedback based on their observations.
4 Oral presentations are delivered via Zoom and before a live, remote audience. A formal PowerPoint slide deck is developed and each member of the team is required to deliver a component of the presentation. While one team member is presenting, the others are required to participate in the chat feature and respond to questions and feedback provided by audience participants. This real-time interaction with the audience becomes part of the assessment process, since it allows instructors to ascertain each team member’s comfort with components of the presentation that he or she may not have been directly involved in developing and delivering.
5 The team project deliverable is a business-related product in the form of a white paper, business memorandum with or without attachments, business letter with or without enclosures, business email, request for proposals (RFP), action plans (table format), contingency plans (table format), root cause analysis (RCA), needs assessment, policy analysis and development, dashboards, and the like. The projects are substantive in nature and run the full length of the term.
6 Each student is required to complete a self-assessment near the end of the course whereby they assess their perceived level of proficiency in meeting the course learning outcomes.
Development of Soft Skill Rubrics
Since the program intended to focus on specific soft skills, we were not able to locate commercially available scoring rubrics that met our needs. As a result, we were left with little choice but to create our own instruments. The program chose to employ a fairly common five-step process in the development of our rubrics: 1) define the purpose of the assessment; 2) choose the rubric type; 3) define the criteria; 4) design the rating scale; and 5) write the descriptors for each scale point. The purpose of the team meetings is to assess each student’s individual effort and contributions during the sessions. More specifically, faculty want to give team leaders (Project Managers) feedback on their ability to set a directional strategy and to demonstrate integrity; organizational savvy; cooperation; tolerance for ambiguity; adaptability and flexibility; and self-awareness. For team members, we want to assess their advance preparation; willingness to work with others; ability to make positive contributions; manage their time wisely; produce quality work; engage in cooperation; and demonstrate adaptability and flexibility. Becoming an effective and productive member of an organization requires each of these soft skills, and that is the task chosen for this assessment. Since faculty are assessing and scoring each soft skill, and then summing the scores to arrive at a composite score, we determined that the design of analytic rubrics was most appropriate. The criteria for assessing team leaders and members are the non-technical skills chosen by the faculty after a careful review of the prevailing literature. These skills are reflected in the purpose for these assessments. We developed a three-tiered rating scale with corresponding point ranges: exceeds expectation (1 to .9), meets expectation (.89 to .8), and does not meet expectation (< .79). The narrative descriptors for each scale point are included in the team leader and team member rubrics (see Figures 3 and 4).

Figure 3. Project Manager (Team Leader) rubric
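For readers interested in the mechanics, the following minimal sketch illustrates one way the analytic scoring described above could be represented: each criterion receives a score on the three-tiered scale and the criterion scores are summed into a composite. The criterion names and scores are hypothetical and are shown only to illustrate the structure, not the program’s actual rubrics.

```python
# Illustrative sketch only: analytic rubric scoring with a three-tiered scale.
# Criterion names and scores below are hypothetical, not the program's data.

SCALE = {
    "exceeds expectation": (0.90, 1.00),
    "meets expectation": (0.80, 0.89),
    "does not meet expectation": (0.00, 0.79),
}

def tier(score: float) -> str:
    """Map a 0-1 criterion score onto the three-tiered rating scale."""
    for label, (low, high) in SCALE.items():
        if low <= score <= high:
            return label
    return "does not meet expectation"

def composite(criterion_scores: dict) -> float:
    """Analytic rubrics score each criterion separately, then sum the scores."""
    return sum(criterion_scores.values())

project_manager_scores = {
    "directional strategy": 0.92,
    "integrity": 0.85,
    "cooperation": 0.88,
    "adaptability and flexibility": 0.78,
}

for skill, score in project_manager_scores.items():
    print(f"{skill}: {score:.2f} ({tier(score)})")
print(f"composite: {composite(project_manager_scores):.2f}")
```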
Analysis and Results
Sample Demographics
To establish validity and reliability of the team leader and member rubrics, faculty drew from more than 7,000 archived team meetings. To be included in this study, students must have graduated from the online MHA program; taken only those courses included in the revamped curriculum; and, for those included in the sample of Project Managers, served in this role at least twice with no fewer than one term between appointments. We pulled two random samples from the online MHA students. The first included eight team members (n = 8) consisting of five (62.5%) males and three (37.5%) females with an average age of 37.25 (SD = 5.092). Fifty percent (50%) of the team members resided in the South, 37.5% in the Midwest, and 12.5% in a foreign country. The team leader sample was comprised of eight students (n = 8) with two (25%) males, six (75%) females, and a mean age of 34 (SD = 10.954). Sixty-two and a half percent (62.5%) of the team leaders resided in the South, 12.5% lived in the East, another 12.5% lived in the Midwest, and the remaining 12.5% lived in the West.

Figure 4. Team member rubric
Validity Testing of Rubrics
To establish content validity for each rubric, the program chose to use the Lawshe (1975) method for mathematically determining the content validity ratio (CVR) of the assessment criteria, and then a content validity index (CVI) for the rubrics. The program recruited 35 subject matter experts (SME) (n = 35) with expertise in learning outcomes assessment and leadership to serve on the Content Evaluation Panel. Once they agreed to serve on the panel, each was sent an email with a link to a SurveyMonkey instrument. They were asked to read each “soft skill” (criterion) and the corresponding descriptors for each scale point, and using a three-tiered Likert scale, let the program know how “essential” they believed each skill and descriptor to be for the team lead or team member rubrics. We also provided a qualitative component to each question, so they could provide supporting rationale for their ratings and offer suggestions for improvement. Armed with this data, the program used the following formula in computing the CVR for each soft skill.

CVR = (ne − N/2) / (N/2)

where ne is the number of panelists identifying an item as “essential” and N is the total number of panelists (N/2 is half the total number of panelists).
The CVI is simply the mean of the CVR values for all items meeting the CVR threshold of > .31. For the team lead rubric, the CVI was .526, with all but the “self-awareness” skill (.020) resulting in statistically significant CVRs at a level of significance of .05 (α = .05). With regard to the team member rubric, and using the same level of significance, the CVI was .673, with all soft skill criteria resulting in statistically significant critical values (see Tables 3 and 4).
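As an illustration of the computation, the following sketch applies Lawshe’s formula to hypothetical panel counts and derives a CVI from the items that clear the critical value; the counts and criterion names are invented for the example and are not the study’s data.

```python
# Illustrative sketch only: Lawshe's CVR for each criterion and the CVI for
# the instrument. Panel counts and criterion names are hypothetical.

def content_validity_ratio(n_essential: int, n_panelists: int) -> float:
    """Lawshe (1975): CVR = (ne - N/2) / (N/2)."""
    half = n_panelists / 2
    return (n_essential - half) / half

N = 35                 # total number of panelists
CVR_THRESHOLD = 0.31   # critical value used by the program at alpha = .05

# Hypothetical counts of panelists rating each criterion "essential"
essential_counts = {
    "directional strategy": 28,
    "integrity": 30,
    "self-awareness": 18,
}

cvrs = {skill: content_validity_ratio(count, N)
        for skill, count in essential_counts.items()}

# CVI = mean of the CVR values for items meeting the critical-value threshold
retained = [value for value in cvrs.values() if value > CVR_THRESHOLD]
cvi = sum(retained) / len(retained)

print({skill: round(value, 3) for skill, value in cvrs.items()})
print("CVI:", round(cvi, 3))
```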
Reliability Testing of Rubrics
To establish inter-rater reliability (IRR), the program computed Cohen’s kappa (k) and percentage agreement (PA). The k is recommended when using two raters to assess agreement while concurrently correcting for chance agreements amongst nominal data. For this reason, we felt it appropriate to compute the k on both instruments. The PA is a less complicated measure of IRR, and simply determines the percentage of agreement between the ratings of two raters. The formula for computing the PA is (agreements / (agreements + disagreements)) × 100%. The team member rubric resulted in a kappa of .41 (k = .41) and a PA of 73.2%, whereas the team lead rubric resulted in a kappa of .40 (k = .40) and a PA of 67.9%.
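For those wishing to replicate the reliability computations, the sketch below shows how Cohen’s kappa and the PA can be obtained for two raters using the three-tiered scale; the ratings are invented and scikit-learn is assumed to be available.

```python
# Illustrative sketch only: Cohen's kappa and percentage agreement (PA) for
# two raters scoring the same recordings. Ratings below are hypothetical.
from sklearn.metrics import cohen_kappa_score

rater_1 = ["exceeds", "meets", "meets", "does_not_meet", "meets", "exceeds"]
rater_2 = ["exceeds", "meets", "does_not_meet", "does_not_meet", "meets", "meets"]

# Chance-corrected agreement between the two raters
kappa = cohen_kappa_score(rater_1, rater_2)

# PA = (agreements / (agreements + disagreements)) x 100%
agreements = sum(a == b for a, b in zip(rater_1, rater_2))
percent_agreement = agreements / len(rater_1) * 100

print("kappa:", round(kappa, 2), "PA:", round(percent_agreement, 1))
```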
To establish intra-rater reliability, the two raters who completed the inter-rater reliability testing were also tasked with assisting in the intra-rater reliability testing. Each rater was asked to perform two separate assessments of the same meeting recordings for the same sample of students. There was ample time allotted between assessments, and a conscious decision was made not to perform any work on the reliability and validity testing during the period between the two assessment cycles. To analyze the resultant data, we computed Pearson’s r. In the case of both instruments, there was a strong positive association between each rater’s first and second assessment of the PMs’ and team members’ demonstrated soft skills (r = .925 and r = .920, respectively; see Tables 3 and 4).
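Intra-rater agreement can be examined the same way; the following sketch computes Pearson’s r between a rater’s first and second scoring of the same recordings, using invented composite scores and assuming SciPy is available.

```python
# Illustrative sketch only: Pearson's r between a rater's first and second
# scoring of the same recordings (intra-rater reliability). Scores are
# hypothetical composite rubric scores on the 0-1 scale.
from scipy.stats import pearsonr

first_pass  = [0.92, 0.85, 0.78, 0.88, 0.95, 0.81]
second_pass = [0.90, 0.86, 0.80, 0.87, 0.94, 0.83]

r, p_value = pearsonr(first_pass, second_pass)
print("r:", round(r, 3), "p:", round(p_value, 4))
```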

Soft Skill Development
To ascertain whether or not the PMs and team members improved their soft skills as they progressed through the online MHA curriculum, we performed a series of dependent t-tests (pre- and post-test) using the numeric ranges associated with each scale point. A baseline assessment was conducted using recorded meetings obtained during the introductory course (HA518: Organization of Healthcare Delivery Systems), and then again in a course taken near the end of the students’ program. Two raters were used to assess the soft skills of the sampled students. This was done in an attempt to validate the findings of each rater and strengthen the overall credibility of our analysis. The results revealed statistically significant improvements in the soft skills of those team leaders and members included in the study (see Tables 5 and 6).
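A minimal sketch of this pre/post comparison, assuming SciPy and using invented baseline and end-of-program composite scores, is shown below.

```python
# Illustrative sketch only: dependent (paired) t-test comparing baseline
# rubric scores from the introductory course with scores from a course taken
# near the end of the program. Scores are hypothetical, not the study's data.
from scipy.stats import ttest_rel

baseline_scores = [0.72, 0.68, 0.80, 0.75, 0.70, 0.78, 0.74, 0.69]
end_of_program  = [0.88, 0.84, 0.91, 0.86, 0.83, 0.90, 0.87, 0.85]

t_stat, p_value = ttest_rel(end_of_program, baseline_scores)
print("t:", round(t_stat, 2), "p:", round(p_value, 4))
```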


Discussion
As a result of our comprehensive program review, we found that our silver bullet approach to addressing soft skill deficiencies was not all that dissimilar from the approaches employed by other graduate programs. Like other programs, our results could be characterized as less than optimally effective. Some students displayed a high level of proficiency whereas others were perceived as being deficient. Unfortunately, we possessed no reliable and valid mechanisms for assessing non-technical skills; instead, we relied largely on subjective observations that may very well have been tainted by instructor bias. The faculty concluded that if we keep doing what we have always done, we will keep getting the results we have always gotten; therefore, a conscious decision was made to change our paradigm and redesign the curriculum to place greater emphasis on soft skills. The curricular redesign was a multiyear endeavor that resulted in the horizontal weaving of key soft skills throughout the curriculum and purposely incorporating them into each course.
Now that the curriculum had been redeveloped, there were two questions the faculty needed to answer. First, is it possible to quantitatively assess soft skills? Second, can we provide empirical evidence that the new curriculum has improved the soft skills of online graduate students? Only after the redevelopment of the MHA curriculum and its courses were we able to develop grading rubrics to assess the non-technical skills of team leaders and members during their meetings. Even after the development and implementation of the rubrics, it took two years before validity and reliability testing could be conducted. After all, we needed rubric data from students who had only taken courses within the new curriculum. At the completion of the validity testing, we concluded that with the exception of “self-awareness” (.20), which was included in the team leader rubric, all other criteria (soft skills) in both rubrics were determined to be statistically significant at the .05 level of significance (α = .05). Based on the critical values for the CVRs and the CVIs, the faculty were reasonably confident that the instruments were able to measure the behaviors for which they were intended. Based on the qualitative feedback provided by the panelists, we were able to establish face validity for each criterion with the exception of “self-awareness”. The panelists expressed the opinion that self-awareness is a valuable skill; however, they did not perceive it as essential for students serving in the capacity of Project Manager.
The inter-rater reliability testing produced results that aligned reasonably well with the literature. A k > .4 represents a “fair” to “good” level of inter-rater reliability (Cicchetti & Sparrow, 1981; Fleiss, 1981; Landis & Koch, 1977; Regier et al., 2013). Efforts were made by the faculty to improve the k, such as using different pairs of raters and modifying rater training; however, the results remained consistent. We are convinced that achieving a “fair” to “good” k is the most likely scenario given the subjective nature of soft skills. The PAs appear consistent with the k for each rubric. With regard to intra-rater reliability, the Pearson’s r was .925 for the team leader rubric and .920 for the member rubric. This was not overly surprising, as we figured that each rater was comfortable with their interpretation of the operational definitions associated with each criterion (soft skill), and was able to consistently apply their understanding of these definitions when assessing the video recordings of team meetings. The faculty were reasonably comfortable that their assessment findings would remain consistent each time the rubrics were used, in each course, to assess the non-technical skills of Project Managers and team members.
Now that we had established the validity and reliability of the rubrics, we were ready to determine whether or not the curriculum was having a positive impact on the soft skill development of our students. Using dependent t-tests, we were able to determine that the students’ soft skills as Project Managers and team members did improve by the time they graduated. While we do not have any conclusive evidence to support which factors contributed to our results, we suspected that it was a combination of interrelated factors. Students learn which behaviors to emulate when they work closely with their peers in structured teams. The students receive individual instructor feedback using the appropriate rubric and qualitative comments in the form of constructive criticism. This information is intended to be used by students as part of their professional development. Furthermore, faculty noted that the students’ desire to perform well in their courses results in a serious effort to comply with rubric criteria. The fact that a team component is built into each course reinforces the importance of soft skills. If it were not important, then why would we work so diligently on incorporating and assessing these skills throughout the program?
Since this case study was limited to the efforts of MHA faculty at one modestly sized (17,000-student), non-profit, Midwestern, four-year university, and their commitment to boosting the curricular focus on soft skills, the findings cannot be generalized to other colleges or universities. We are unapologetically biased in our realistic portrayal of the many accomplishments that would not have been possible without our faculty’s unwavering commitment to improving the soft skills of our graduate students.
Conclusion
There were two key questions the online MHA program sought to answer. First, is it possible to quantitatively assess soft skills? Second, can we provide empirical evidence that the new curriculum has improved the soft skills of online MHA graduate students? To answer these questions, we took data and information obtained from our comprehensive program review and completely reenvisioned the curriculum. The faculty saw an opportunity to create a paradigm shift in the delivery of online graduate education with greater emphasis being placed on soft skills. We developed a set of grading rubrics to assess our students’ non-technical skills, and then established their validity and reliability. Armed with valid and reliable rubrics, we were able to statistically determine that the soft skills of our students improved by graduation. We hope that the lessons we have learned and the outcomes we have realized will serve as a catalyst for change in colleges and universities that are seriously looking to focus more resources on soft skill development.
References
- Andrews, J., & Higson, H. (2008). Graduate employability, ‘soft skills’ versus ‘hard’ business knowledge: A European study. Higher Education in Europe, 33(4), 411-422. doi: 10.1080/03797720802522627.
- Anthony, S., & Garner, B. (2016). Teaching soft skills to business students: An analysis of multiple pedagogical methods. Business and Professional Communication Quarterly, 79(3), 360-370. doi: 10.1177/2329490616642247
- American College of Healthcare Executives. (2020). ACHE healthcare executive 2020 competencies assessment tool. Retrieved from https://www.ache.org/-/media/ache/career-resource-center/competencies_booklet.pdf
- Aquardo, D., Sanchez-Manzanares, M., Rico, R., & Salas, E. (2014). Teamwork competency test (TWCT): A step forward on measuring teamwork competencies. Group Dynamics: Theory, Research, and Practice, 18(2), 101-121. doi: 10.1037/t33450-000.
- Avolio, B. E., Benzaquen, J. B., & Pretell, C. (2019). Global challenges for business education and the new educational agenda: Graduate attributes and teaching methods. e-Journal of Business Education and Scholarship of Teaching, 13(2), 80-99.
- Bhagra, A., & Sharma, D. K. (2018). Changing paradigm of employability skills in the global business world: A review. The IUP Journal of Soft Skills, 12(2), 7-24. doi: 10.1080/00091383.2018.1540819.
- Bedwell, W. L., Fiore, S. M., & Salas, D. (2014). Developing the future workforce: An approach for integrating interpersonal skills into the MBA classroom. Academy of Management Learning & Education, 13(2), 171-186. doi: 10.5465/amle.2011.0138.
- Bloomberg Next. (2018). Building tomorrow’s talent: Collaboration can close emerging skills gap. Retrieved from http://unitedwayswva. org/wp-content/uploads/2019/07/Building-Tomorrows-Talent-Collaboration-Can-Close-Emerging-Skills-Gap.pdf
- Cicchetti, D. V. & Sparrow, S. S. (1981). Developing criteria for establishing interrater reliability of specific items: Applications to assessment of adaptive behavior. American Journal of Mental Deficiency, 86(2), 127-137.
- Clement, D. G., Hall, R. S., O’Connor, S. J., Qu, H., Stefl, M. E., & White, A. W. (2010). Competency development and validation: A collaborative approach among four graduate programs. The Journal of Health Administration Education, 27(3), 151-173.
- Cobb, E. J., Meixelsperger, J., & Seitz, K. K. (2015). Beyond the classroom: Fostering soft skills in pre-professional LIS organizations. Journal of Library Administration, 55(2), 114-120. doi: 10.1080/01930826.2014.995550.
- Fleiss, J. L. (1981). Statistical methods for rates and proportions. (2nd ed.). New York, NY: John Wiley, pp. 38-46.
- Fisher, K., & Newton, C. (2014). Transforming the twenty-first-century campus to enhance the net-generation student learning experience: Using evidence-based design to determine what works and why in virtual/physical teaching spaces. Higher Education Research & Development, 33(5), 903-920. doi: 10.1080/07294360.2014.890566. Retrieved from https://www.tandfonline.com/doi/full/10.1080/07294360.2014.890566
- Friedman, L. H., & Frogner, B. K. (2010). Are our graduates being provided with the right competencies? Findings from an early careerist skills survey. The Journal of Health Administration Education, 27(4), 269-296.
- Gartman, A. N. (2006). Communication and relationship management. Journal of Healthcare Management, 51(5), 291-291.
- Holmberg-Wright, K., & Hribar, T. (2016). Soft skills – The missing piece for entrepreneurs to grow a business. American Journal of Management, 16(1), 11-18.
- Jones, M., Baldi, C., Phillips, C., & Waikar, A. (2016). The hard truth about soft skills: What recruiters look for in business graduates. College Student Journal, 50(3), 422-428.
- Landis, J. R. & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159-174. doi: 10.2307/2529310.
- Lawshe, C.H. (1975). A quantitative approach to content validity. Personnel Psychology, 28, 563-575. doi: 10.1111/j.1744-6570.1975.tb01393.x.
- Levant, Y., Coulmont, M., & Sandu, R. (2016). Business simulation as an active learning activity for developing soft skills. Accounting Education, 25(4), 368-395. doi: 10.1080/09639284.2016.1191272
- MacDermott, C., & Ortiz, L. (2017). Beyond the business communication course: A historical perspective on the where, why, and how of soft skills development and job readiness for business graduates. The IUP Journal of Soft Skills, 11(2), 7-24.
- Majid, S., Eapen, C. M., Aung, E. M., & Oo, K. T. (2019). The importance of soft skills for employability and career development: Students and employers’ perspective. The IUP Journal of Soft Skills, 13(4), 7-39.
- Martinez, S. (2014). How to teach and assess the teamwork competency through e-learning. The 10th International Scientific Conference eLearning and Software for Education, Budapest, April 24-25. doi: 10.12753/2066-026X-14-107.
- Mangelsdorff, A. D. (2014). Competency-based curriculum, outcomes, and leader development: Applications to a graduate program in health administration. Journal of Health Administration Education, 31(2), 111-133.
- Pereira, O. (2013). Soft skills: From university to the work environment. Analysis of a survey of graduates in Portugal. Regional and Sectoral Economic Studies, 13(1), 105-118.
- Regier, D. A., Narrow, W. E., Clarke, D. E., Kraemer, H. C., Kuramoto, S. J., Kuhl, E. A., & Kupfer, D. J. (2013). DSM-5 field trials in the United States and Canada, Part II: Test-retest reliability of selected categorical diagnoses. American Journal of Psychiatry, 170(1), 59-70. doi: 10.1176/appi.ajp.2012.12070999.
- Renko, M., Tarabishy, A. E., Casrud, A. L., & Bannback, M. (2015). Understanding and measuring entrepreneurial leadership style. Journal of Small Business Management, 53(1), 54-74. doi: 10.1111/jsbm.12086.
- Ritter, B. A., Small, E., Mortimer, J. W., & Doll, J. L. (2017). Designing management curriculum for workplace readiness: Developing students’ soft skills. Journal of Management Education, 42(1), 80-103. doi: 10.1177/1052562917703679.
- Robinson, S. (2014). Teaching creativity, teamwork and other soft skills for entrepreneurship. Journal of Entrepreneurship Education, 17(2), 186-197.
- Singh, A. & Jaykumar, P. (2019). On the road to consensus: Key soft skills required for youth employment in the service sector. Worldwide Hospitality and Tourism Themes, 11(1), 10-24. doi: 10.1108/WHATT-10-2018-0066.
- Sousa, M. J. (2018). Entrepreneurship skills development in higher education courses for team leaders. Administrative Sciences, 8(18), 1-15. doi: 10.3390/admsci8020018.
- Stefl, M. E. (2008). Common competencies for all healthcare managers: The Healthcare Leadership Alliance Model. Journal of Healthcare Management, 53(6), 360-373. doi: 10.1097/00115514-200811000-00004.
- Tugend, A. (2013, June 28). What it takes to make new college graduates employable. The New York Times. Retrieved from https://www.nytimes.com/2013/06/29/your-money/a-quest-to-make-college-graduates-employable.html
- Van Schaik, S. M., O’Brien, B. C., Almeida, S. A., & Adler, S. R. (2014). Perceptions of interprofessional teamwork in low-acuity settings: A qualitative study. Medical Education, 48, 583-592. doi: 10.1111/medu.12424.