
Thursday, 23 December 2021

DEVELOPMENT AND VALIDATION OF A STRUCTURED CLINICAL ASSESSMENT TOOL FOR ASSESSING STUDENT NURSES’ CLINICAL COMPETENCE

ABSTRACT

Assessment of clinical performance contributes to academic qualifications that incorporate professional awards. The administrators of schools of nursing face the problem of subjectivity in the practical examination of student nurses. This is evident in examination situations in which the examiner assigns any task of choice to a student and scores the student based on his or her perception of the student’s competence in performing the task. By this, some students are exposed to more difficult tasks than others and to subjective scoring, all depending on the inclination of the examiner. In response to this problem, the study developed and validated a Structured Clinical Assessment Tool (SCAT) that makes it possible for all students to be examined on the same tasks in any examination episode and judged on the same premise. An instrumentation research design was used. One hundred and thirty-seven student nurses from three Schools of Nursing in the South East Zone of Nigeria formed the sample for the study. Prior to developing the tool, a competency assessment framework was developed based on the nursing process model, with the five steps of the process serving as the core competencies and sub-skills identified for each core competency. The appropriateness of the sub-skills was verified by 52 nurse educators. The care sub-skills were pooled to form the model for the SCAT. The model consists of twelve activity stations, which are examination points where students perform specified nursing tasks and are scored against a predetermined standard. Initially, 48 items (four per station) and their scoring guide were generated, and four experienced nurse educators/managers verified their appropriateness. Thirty-six items survived the validation exercise, which used the average congruency percentage. Data collected were analysed using the alpha coefficient, t-test and analysis of variance.
The results of the analysis confirmed the validity of the 36 items and showed that the items were able to discriminate between high and low achievers. The high reliability indices (0.84-0.99) for most of the procedure station items and the moderate indices (0.69-0.78) for the others confirm that the instrument has good inter-scorer consistency and is therefore reliable. Based on these findings, the SCAT has the potential to reduce the subjectivity inherent in clinical assessments based on observation, and it is therefore recommended for assessing the clinical competence of student nurses.
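For readers unfamiliar with the two validation statistics named above, the following is an illustrative sketch only, not code from the study: it computes the average congruency percentage (the share of items each expert judges congruent with the objective, averaged across experts) and the alpha coefficient (here applied across scorers as a measure of inter-scorer consistency). All data and thresholds in the sketch are invented for illustration.

```python
def average_congruency_percentage(ratings):
    """ratings: one list per expert, each entry 1 (item judged congruent)
    or 0 (not congruent). Returns each expert's percentage of congruent
    judgements, averaged across experts (the ACP)."""
    per_expert = [100.0 * sum(r) / len(r) for r in ratings]
    return sum(per_expert) / len(per_expert)

def cronbach_alpha(scores):
    """scores: one list per scorer, each entry a score for one student.
    Treats each scorer as an 'item' and returns the alpha coefficient:
    alpha = k/(k-1) * (1 - sum of scorer variances / variance of totals)."""
    k = len(scores)                      # number of scorers
    n = len(scores[0])                   # number of students

    def var(xs):                         # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    scorer_vars = sum(var(s) for s in scores)
    totals = [sum(s[i] for s in scores) for i in range(n)]
    return (k / (k - 1)) * (1 - scorer_vars / var(totals))
```

With made-up data, two experts rating four items `[1, 1, 1, 0]` and `[1, 1, 1, 1]` give an ACP of 87.5, and two scorers who award identical marks give an alpha of 1.0, the ceiling of the 0.84-0.99 range reported above.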

 

Chapter One

Introduction

Background to the Study

Effective administration requires rational decision making, which leads to the selection of the way to reach the anticipated goal. The educational administrator, in trying to achieve the ultimate goal of improving learning and learning opportunities to ensure competent products, is faced with the responsibility of making decisions on such issues as selecting an appropriate curriculum, appropriate teaching methods, and appropriate methods for assessing students’ progress. If appropriate decisions are made on these issues, appropriate educational policies will follow and the goals of education will be met. However, if inappropriate decisions are made, particularly on methods of assessing students, society is exposed to the danger of incompetent practice. This is so because learners who have not acquired the knowledge and skills necessary for competent practice may be certified as qualified to practice and may not give quality and safe care. Generally, the school curriculum is organized to expose students to subjects that give them the opportunity to acquire the knowledge and skills that should help them practice. Yet sometimes students who have passed written examinations and have been certified fit to practice fail to perform competently. Considering the legal and financial implications of employee performance and safe practice in a rapidly changing environment, a major concern of the administrator of an educational institution should be to produce competent manpower. In assessing students for certification to practice, in this case in a health care institution, it is therefore important to generate data that will help in deciding whether they can perform the tasks that the knowledge they have acquired should enable them to accomplish. This can be done only if an appropriate assessment tool is in place.

  Stressing the importance of assessing what nursing care providers can do, not merely what they know, Del Bueno (1990) cited situations in which people who had performed excellently in examinations had difficulty performing a procedure or recognizing warning signs in patients experiencing difficulty. This kind of situation is unacceptable, and it informed the reforms in nursing education that led to calls for the assessment of clinical performance to contribute to academic qualifications that incorporate professional awards. In response to this call, training institutions have developed clinical assessment tools. However, Redfern, Norman, Calman, Watson and Murrels (2002) expressed concern about the psychometric quality of the available tools and their ability to distinguish between different levels of practice. They analyzed some tools for assessing competence to practice in nursing, while Norman, Watson, Murrels, Calman and Redfern (2002) tested selected nursing and midwifery competence assessment tools for reliability and validity. Both teams of researchers concluded that a multi-method approach, which enhances validity and ensures comprehensive assessment, is needed for clinical competence assessment in nursing and midwifery.

      To ensure such a tool, Lenburg (2006) created a constellation of ten basic concepts and suggested that they be adapted for developing and implementing objective performance examinations. They include:

           Concept of examination

           Dimensions of practice

           Required critical elements

           Objectivity of the assessment process

           Sampling critical skills for the testing period

           Level of acceptability

           Comparability in extent, difficulty and requirements

           Consistency in implementation

           Flexibility in actual clinical environment

           Systematized conditions

These concepts are very useful for developing accurate assessment instruments. Thus far, in the nursing context in Nigeria, such a tool does not exist. The administrators of nursing schools face the problem of subjectivity in the practical examination of student nurses. This is evident in situations where students are given different tasks to perform during a clinical examination and are awarded grades based on the tasks they perform. By this, some students are exposed to more difficult tasks than others, depending on the inclination of the examiner, and yet all are judged on the same maximum score. This is unfair. It is therefore necessary to develop an assessment tool that examines all students on the same tasks for a particular examination episode.

To accomplish this, consideration should be given to the concepts proposed by Lenburg (2006) mentioned earlier. To achieve objectivity in an assessment process, two components must be considered: first, the content (skills and critical elements) for the particular assessment should be specified in writing; second, there should be consensual agreement among everyone directly involved in any aspect of the examination process. When individual examiners begin to digress from the established standards and protocols, objectivity erodes back into subjectivity and inconsistency. This regression destroys both the process and its purpose.

To prevent this from occurring, the educational administrator should ensure that the content of the examination is specified by a list of the dimensions of practice, that is, the skills and competencies and the required critical elements that determine the extent and conditions of competence. The use of a conceptual framework to systematically guide the assessment process increases the likelihood that concepts and variables universally salient to nursing and health care practice will be identified and explicated (Waltz, Strickland & Lenz, 2005).

Concepts of interest to nurses and other health professionals are usually difficult to operationalize, that is, to render measurable. This is partly because nurses and other health professionals deal with a multiplicity of complex variables in diverse settings, playing a myriad of roles as they collaborate with a variety of others to attain their own and others’ goals. Hence, the dilemma they are apt to encounter in measuring concepts is twofold: first, the significant variables to be measured must somehow be isolated; second, very ambiguous and abstract notions must be reduced to a set of concrete behavioural indicators. It is therefore the responsibility of the educational administrator, who knows the intended goals and selected the content that should help achieve them, to select the variables to be measured and to reduce them to concrete behavioural indicators of competence. These should be incorporated into a protocol that will guide the assessor. Protocols ensure that each test episode for a given group is comparable in extent, difficulty and requirements. A protocol also ensures that the process is implemented consistently, regardless of who administers the examination or when it is conducted. When performance examinations are administered in an actual clinical environment, not in simulation, the concept of flexibility is essential, as each client is different. The responsible educational administrator, who prepares students for professional practice, is therefore challenged to develop appropriate competency-based assessment tools for use in the assessment of students’ clinical competence.

A competency-based assessment tool focuses on measuring the actual performance of what a person can do rather than what the person knows. It is based on criterion-referenced assessment methods, in which the learner’s performance is assessed against a set of criteria provided so that both the learner and the assessor are clear about the performance required. Competency-based assessment addresses the psychomotor, cognitive and affective domains of learning, and its goal is to assess performance for the effective application of knowledge and skill in the practice setting. The competencies can be generic to clinical practice in any setting, specific to a clinical specialty, basic or advanced (Benner, 1982; Gurvis & Grey, 1995).

 Criterion-referenced measures are particularly useful in the clinical area when the concern is the measurement of process and outcome variables, as applies in nursing. A criterion-referenced measure of process, according to Waltz, Strickland and Lenz (2005), requires that one identify standards for the client care intervention and compare the subject’s clinical performance with the standard of performance, which is the predetermined target behaviour. When all these are taken into consideration in developing a clinical assessment tool, the tool is bound to be authentic.
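The criterion-referenced idea described above can be sketched in a few lines of code. This is a hypothetical illustration, not the SCAT itself: the station name and its critical elements are invented, and a real instrument would carry far more detail. The point is that every candidate at a station is scored against the same predetermined checklist, so the judgement does not depend on the individual assessor.

```python
# Hypothetical critical elements for one activity station. In a real
# instrument these would come from the validated scoring guide.
CRITICAL_ELEMENTS = {
    "hand_washing": [
        "removes jewellery",
        "wets hands and applies soap",
        "lathers all surfaces of the hands",
        "rinses with fingers pointing downwards",
        "dries hands with a clean towel",
    ],
}

def score_station(station, observed):
    """Compare a candidate's observed actions with the predetermined
    standard for the station: one mark per critical element performed.
    Returns (score, maximum score), so all candidates at the station
    are judged on the same premise."""
    elements = CRITICAL_ELEMENTS[station]
    score = sum(1 for element in elements if element in observed)
    return score, len(elements)
```

For example, a candidate observed to perform only "removes jewellery" and "dries hands with a clean towel" would score 2 out of the station's maximum of 5, regardless of which assessor watched the attempt.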

 

Statement of Problem

 In Nigeria, assessment of clinical performance contributes to the academic qualification for professional awards. The Nursing and Midwifery Council of Nigeria (NMCN) has adopted the Objective Structured Clinical Examination (OSCE) for midwifery but has not done the same for the general nursing examination. The tool currently in use for clinical assessment in the general nursing examination leaves a lot to be desired. It lacks the comparability and consistency required to make an assessment tool objective and fair, hence the need for a structured clinical assessment tool. Some of the pitfalls of the tool include:

           The tool allows the assessor to select the procedure each candidate is to perform, and this selection varies from one candidate to another. The implication is that the candidates do not all perform the same tasks, the tasks they perform are not comparable and, since task difficulty is not the same for all tasks, the candidates are neither examined nor judged on the same premise. This is unfair.

           Another problem, closely linked with not specifying tasks that all candidates must perform, is that the mark allotted to the item “procedure” is the same for all procedures, whether simple or complex; some candidates are assigned simpler tasks than others yet are judged on the same optimal score for less work, which makes the tool unfair. Again, because the activities expected to be carried out for each procedure are not specified, the scoring of a candidate’s performance is based on what the scorer thinks is right, and this may vary from one scorer to another. The implication is that, most times, the scoring is subjective.

           Sometimes, the length of time required to accomplish the task assigned to a candidate may not give the assessor the opportunity to assess the candidate on all the areas listed on the clinical performance assessment guide. Since all the items sum up to the maximum score, this creates the difficulty of deciding how to score those items, particularly as it is not the candidate’s fault that he or she was not examined in those areas by the particular assessor.

           Again, some of the criteria on which the candidates are judged are not stated in specific terms. For example, statements such as “handles patients gently and skillfully” and “adapts the environment for the patient’s comfort” are not specific about what the candidate is expected to do and therefore leave room for the assessor’s subjective conclusions. The implication of all these is that some of the results of assessments using this kind of tool are not valid. This may have a negative impact on a candidate who fails when he or she should actually have passed, and on the consumers of nursing care where a candidate who has not acquired the skills necessary for competent and safe practice passes when he or she should have failed.

In view of these problems, there is the need to develop a clinical assessment tool that is objective and fair. This is the intent of this study.



Delivery: Email

No. of Pages: 150

NB: This complete Master’s project in Educational Management and Policy is well written and ready to use.

Price: 10,000 NGN
In Stock