What are the criteria for validity and reliability in the context of business research?
INTRODUCTION.
Validity
The term validity, as used in research, refers to the extent to which the scores produced by a measure actually represent the phenomenon the researcher is trying to measure. Establishing validity gives the researcher greater confidence in the conclusions drawn from the data. Even so, the fact that findings are reliable does not mean that they are valid. Several criteria have been used to assess validity (Price et al., 2015).
Face validity
This is the extent to which a measurement method appears, on its face, to measure the construct the researcher intends to measure. Consider the case of a self-esteem questionnaire: items asking how worthy respondents think they are, or what personal qualities they believe they have, would show good face validity. In the same self-esteem study, measuring the length of respondents' fingers would tell us nothing about their self-esteem and would therefore have poor face validity. Face validity is only weak evidence that a measurement method measures what it is intended to measure, because it rests on people's intuitions about human behaviour and feelings, which are often mistaken. Although face validity can be assessed quantitatively, for example by asking a sample of people to rate how well a measure appears to capture the intended construct, it is usually assessed informally.
Content validity
Content validity refers to the extent to which a measure represents all facets of a construct. It is most often assessed by relying on the judgement of people who are familiar with the construct being measured. These subject-matter experts are given access to the measurement tool and asked to provide feedback on how well each question measures the construct in question. Their feedback is then analysed, and an informed decision is made about the effectiveness of each question.
Criterion validity
This is the extent to which the scores on one measure predict or correlate with an outcome on another measure, called the criterion. This type of validity is useful where a test is intended to predict performance or behaviour in another situation. For example, a selection test given during an interview may be used to predict how well a candidate will perform on the job; if candidates who score well on the test also perform well once hired, the test has criterion validity. The criterion can be any variable that one has reason to believe should correlate with the construct being measured, and there will usually be many such variables.
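To make this concrete, the short Python sketch below illustrates how criterion validity is commonly quantified, namely as the correlation between scores on the measure and scores on the criterion. The selection-test scores and job-performance ratings are invented for illustration only and are not taken from Price et al. (2015).

```python
import numpy as np

# Hypothetical data: selection-test scores for ten applicants and
# supervisor ratings of their job performance six months after hiring.
test_scores = np.array([62, 75, 58, 90, 70, 84, 66, 79, 55, 88])
job_performance = np.array([3.1, 3.8, 2.9, 4.6, 3.5, 4.2, 3.2, 4.0, 2.7, 4.4])

# Criterion validity is typically expressed as the correlation between
# the measure and the criterion it is meant to predict.
r = np.corrcoef(test_scores, job_performance)[0, 1]
print(f"Criterion validity (Pearson r): {r:.2f}")
```

A correlation close to 1 would indicate strong criterion validity, while a value near 0 would indicate that the test tells us little about later performance.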
Discriminant validity
This is a type of validity that assesses the extent to which scores on a measure do not correlate with measures of variables that are conceptually distinct from the construct of interest.
Reliability
Reliability is the degree to which a research method produces stable and consistent results. If using the same method to measure the same phenomenon more than once produces the same results, then we say the results are reliable. Reliability is divided into several categories:
Test-retest reliability
This relates to the measure of reliability obtained by administering the same test to the same group of people more than once over a period of time and comparing the results. For example, a researcher may give employees in a company a questionnaire about their satisfaction with the pay for their position, and then ask the same employees to fill it in again after an interval of one or two weeks. Comparing the responses given by the same people at the different time points shows how reliable the results are.
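As a minimal sketch, and assuming hypothetical pay-satisfaction scores collected from the same ten employees two weeks apart (the numbers below are invented purely for illustration), test-retest reliability can be expressed as the correlation between the two administrations:

```python
import numpy as np

# Hypothetical data: pay-satisfaction scores (1-10) from the same ten
# employees, collected two weeks apart with the same questionnaire.
time_1 = np.array([7, 5, 8, 6, 9, 4, 7, 6, 8, 5])
time_2 = np.array([7, 6, 8, 5, 9, 4, 6, 6, 7, 5])

# Test-retest reliability is usually reported as the correlation between
# the two administrations; values near 1 indicate stable scores.
r = np.corrcoef(time_1, time_2)[0, 1]
print(f"Test-retest reliability (Pearson r): {r:.2f}")
```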
Parallel forms reliability
This is another category of reliability, relating to a measure the researcher obtains by assessing the same phenomenon with the same group of participants using more than one form of assessment. For example, a researcher may draft a questionnaire on employees' satisfaction with the pay for their position, and then assess the same group of respondents again using a different method, such as an interview or a focus group. Comparing how far the two sets of results agree tests how reliable they are, so that a final conclusion can be drawn from both.
Inter-rater reliability
This category of reliability relates to the agreement between sets of results obtained by different assessors using the same method. An example is when employees' satisfaction with the pay for their position is assessed by two assessors using the observation method. Inter-rater reliability reflects how closely the two assessors' results agree.
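Where the two assessors make categorical judgements, inter-rater reliability is often summarised with Cohen's kappa, which corrects the raters' observed agreement for the agreement expected by chance. The sketch below uses invented ratings in which each assessor classifies the same ten employees as satisfied (1) or dissatisfied (0); the data are assumptions, not results from any real study.

```python
import numpy as np

# Hypothetical data: two assessors independently observe the same ten
# employees and classify each as satisfied (1) or dissatisfied (0).
rater_a = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
rater_b = np.array([1, 0, 1, 0, 0, 1, 0, 1, 1, 1])

# Observed agreement: proportion of employees the raters classify alike.
p_observed = np.mean(rater_a == rater_b)

# Expected agreement by chance, from each rater's marginal proportions.
p_expected = sum(
    np.mean(rater_a == c) * np.mean(rater_b == c) for c in (0, 1)
)

# Cohen's kappa corrects observed agreement for chance agreement.
kappa = (p_observed - p_expected) / (1 - p_expected)
print(f"Inter-rater reliability (Cohen's kappa): {kappa:.2f}")
```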
Internal consistency reliability
Internal consistency reliability is indicated when different items within a test that measure the same construct produce similar results. It can be assessed in two main ways. The average inter-item correlation is obtained by correlating every pair of items that measure the same construct and averaging those correlations. The other is split-half reliability, where the items of the test are split into two halves, each respondent's score on each half is computed, and the two sets of half-scores are correlated.
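As a rough illustration, the sketch below computes both quantities for an invented set of responses to four pay-satisfaction items (the data and the way the items are split are assumptions made purely for illustration): the average inter-item correlation from the correlation matrix of the items, and split-half reliability from the correlation between the summed scores on the two halves.

```python
import numpy as np

# Hypothetical data: responses (1-5) from eight employees to four
# questionnaire items that are all intended to measure pay satisfaction.
items = np.array([
    [4, 4, 5, 4],
    [2, 3, 2, 2],
    [5, 4, 5, 5],
    [3, 3, 4, 3],
    [1, 2, 1, 2],
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
])

# Average inter-item correlation: correlate every pair of items and
# average the off-diagonal correlations.
corr = np.corrcoef(items, rowvar=False)
pairwise = corr[np.triu_indices_from(corr, k=1)]
print(f"Average inter-item correlation: {pairwise.mean():.2f}")

# Split-half reliability: split the items into two halves, sum each
# half for every respondent, then correlate the two half-scores.
half_1 = items[:, :2].sum(axis=1)
half_2 = items[:, 2:].sum(axis=1)
r_half = np.corrcoef(half_1, half_2)[0, 1]
print(f"Split-half reliability (Pearson r): {r_half:.2f}")
```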
Reference
Price, P. C., Jhangiani, R., & Chiang, I.-C. A. (2015). Research methods in psychology. BCcampus.