[Audio] Good evening. My name is Mr. Jayros Villa Amar, an MSIT student. Today I am going to present quantitative research.
[Audio] Quantitative research is the process of collecting and analyzing numerical data. It can be used to find patterns and averages, make predictions, test causal relationships, and generalize results to wider populations. In addition to what Sir Ramil discussed before, I will cover the four types of quantitative research.
[Audio] There are four types of quantitative research: descriptive, correlational, quasi-experimental, and experimental research.
[Audio] Descriptive research is concerned with describing the nature, characteristics, and components of a population or a phenomenon. It attempts to find the general attributes of a presently existing situation and aims to define the existing conditions of a classified variable.
[Audio] Correlational research. This quantitative research is designed to determine the degree of relationship or association between two or more variables using statistical data, without necessarily investigating the causal reasons underlying them. For correlational research, we use correlational statistics such as Pearson's r or chi-square, depending on the statistical tool you have set in your methodology. Correlational research seeks to interpret the relationships between and among a number of facts. It distinguishes tendencies and patterns in data, but it does not go so far in its analysis as to prove the causes of these observed patterns. An example research title for correlational research: "The Correlation between Gender and College Course Choice." In other words, we want to know how the college course choices of male and female students differ.
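As a quick illustration of one of the correlational statistics mentioned above, here is a minimal Python sketch of a chi-square test of independence between gender and college course choice. The contingency counts are hypothetical, invented purely to show the procedure, not data from any actual study.

```python
# Minimal sketch: chi-square test of independence between gender
# and college course choice. The counts are hypothetical.
from scipy.stats import chi2_contingency

# Rows: male, female; columns: three hypothetical course choices
observed = [
    [30, 15, 25],  # male:   IT, Education, Business
    [20, 35, 25],  # female: IT, Education, Business
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p:.4f}")
# p < 0.05 would suggest that course choice is associated with
# gender in this made-up sample; it says nothing about causation.
```

Note that the chi-square test shows association, not causation, which matches the point above that correlational research does not prove the causes of observed patterns.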
[Audio] Quasi-experimental research. This quantitative research is also known as causal-comparative research. It is designed to ascertain cause-and-effect relationships among variables, that is, the relationship between the independent and dependent variables. An example title for quasi-experimental research: "The Effects of Computer Addiction on the Academic Performance of Junior High School Students." In quasi-experimental research, we do not simply run a survey or collect data from participants; instead, we typically conduct a pre-test and a post-test.
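A pre-test/post-test design like the one just described is often analyzed with a paired-samples t-test. The sketch below is a minimal, hypothetical illustration; the scores are invented and the test choice is an assumption about how such a design might be analyzed, not the method of any particular study.

```python
# Minimal sketch: paired-samples t-test for pre-test/post-test
# scores from the same participants. The scores are hypothetical.
from scipy.stats import ttest_rel

pre_test  = [62, 70, 55, 68, 74, 60, 66, 59]
post_test = [75, 72, 63, 80, 79, 68, 71, 70]

t_stat, p_value = ttest_rel(pre_test, post_test)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# A small p-value indicates a significant change from pre-test
# to post-test for the same group of participants.
```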
[Audio] Experimental research. This quantitative research is usually termed "true experimentation." It is designed to prove the cause-and-effect relationship among the group of variables that make up a study by applying the scientific method. True experimentation is often considered a laboratory study, although a laboratory setting is not necessary in most situations. An example for experimental research: a feasibility study on aratiles fruit as a potential medication for diabetes.
[Audio] I have here a sample research study by an MSIT student entitled "Multi-channel Support and Ticketing Interface for Online Support Management System Platforms."
[Audio] Introduction. Information technology (IT) plays a significant role in all aspects of digital society and challenges the educational system. The increasing role played by information technology has changed how we communicate with each other and how we find the information we need in our work and daily lives. Customer service is an important competitive lever for the modern firm. At the same time, the continuous evolution and performance improvements in information technology capabilities have enabled multichannel service delivery strategies (Lui and Piccoli, 2016). The multi-channel support and ticketing interface for OSMS platforms is one of the core parts of acceptable service and operation. Gallimore (2020) defines multi-channel support as "a combination of two or more channels that companies use to communicate with their end-users." Ozdoruk (2020) also states that "in order to meet customer expectations, companies must provide multi-channel support service" to make customers happy by making them feel appreciated and listened to. Essentially, multi-channel support opens the door for numerous choices, making it convenient for stakeholders to send their feedback or complaints. The goal of multi-channel integration is to provide a superior customer experience that is consistent and seamless across channels (Goersch, 2002) and to deliver fast, reliable internal customer service, resulting in improved IT department operations and satisfied employees (Ismaili et al., 2018). The ability to integrate innovative technology into viable learning strategies is beneficial and promising. Stone et al. (2002) enumerate the benefits of multi-channel support: identification and capture of opportunities for increasing value per customer; increased convenience and an improved experience; increased efficiency through the sharing of processes, technology, and information; organizational flexibility; exploitation of customer data to identify customer needs and new paths for growth; and the ability for customers to switch easily between the various channels, whenever and wherever it suits them, depending on their preference and the type of interaction. From a wider perspective, multi-channel support is the main part of the service function that collects all the queries from all possible channels in one place. The major concept of this study therefore focuses on the development and evaluation of the multi-channel support and ticketing interface for OSMS platforms at universities.
[Audio] Figure 1 shows the conceptual diagram of how the multichannel support and ticketing interface for OSMS platforms works. The diagram shows that stakeholders create a complaint via the available primary channel. A technical support agent receives the complaint and verifies that the problem is real, not just perceived. The technical support agent also ensures that enough information about the problem is obtained from the stakeholder. This information generally includes the stakeholder's environment, how and when the issue occurs, and all other relevant circumstances. The technical support agent creates the issue in the system, entering all relevant data as provided by the requester. As work is done on the issue, the system is updated with new data by the technical support agent, and any attempt at fixing the problem is noted in the system. The ticket status will most likely change from open to pending. After the issue has been fully addressed, it is marked as resolved in the system. If a technical support agent cannot, for whatever reason, handle the service request, the request is escalated and the head of the technical support team resolves the issue. This study aims to develop a multi-channel support and ticketing interface for OSMS platforms that streamlines all support requests and consolidates stakeholders' queries coming from multiple channels to provide a holistic view of all ticket-related information in one place for universities. Specifically, it seeks to answer the following questions: (1) What are the steps undertaken in the development of the system? (2) What is the evaluation of the IT instructors, technical support agents, and office personnel based on ISO/IEC 25010 software product quality in terms of functional suitability, performance efficiency, compatibility, usability, reliability, security, and maintainability? (3) What is the difference in the evaluation of the developed system among the three groups of respondents? (4) What problems were encountered by the respondents during system testing from the perspective of the IT instructors, technical support agents, and office personnel?
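To make the ticket lifecycle just described easier to follow, here is a minimal Python sketch of the status flow (open, pending, resolved, escalated). All class, field, and status names are hypothetical illustrations and are not taken from the actual OSMS implementation.

```python
# Minimal sketch of the ticket lifecycle described for Figure 1.
# Names and statuses are hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class Ticket:
    requester: str
    channel: str              # e.g., email, chat, or phone
    description: str
    status: str = "open"      # new tickets start as open
    notes: list = field(default_factory=list)

    def add_fix_attempt(self, note: str):
        # Any attempt at fixing the problem is noted in the system;
        # the status most likely changes from open to pending.
        self.notes.append(note)
        self.status = "pending"

    def resolve(self):
        # After the issue is fully addressed, mark it resolved.
        self.status = "resolved"

    def escalate(self):
        # If the agent cannot handle the request, it is escalated
        # to the head of the technical support team.
        self.status = "escalated"

# Example: a complaint arriving through one of the channels.
ticket = Ticket("stakeholder@example.edu", "email", "Cannot log in")
ticket.add_fix_attempt("Reset the stakeholder's password")
ticket.resolve()
print(ticket.status)  # resolved
```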
[Audio] MATERIALS AND METHODS. Participants. For the initial deployment and testing, the study was conducted at the University of Antique, a state university in Antique founded in 1954 that offers courses across various disciplines such as technology, maritime studies, teacher education, computer studies, engineering, architecture, business, and arts and sciences. A total of 65 respondents were included in the study, divided into three purposively chosen groups: 20 IT instructors, 38 office personnel, and six (6) technical support agents.
[Audio] Instruments and Measures. The researcher anchored the study in research and development (R&D), which aims to understand a subject matter more completely and build on the body of knowledge relating to it. The researcher also used the descriptive type of research in the conduct of the study. This method of research involves either identifying the characteristics of an observed phenomenon or exploring possible correlations among two or more phenomena.
[Audio] The research instruments used were unstructured interviews, document analysis, and a modified survey questionnaire adapted from the ISO/IEC 25010 software product quality model for system evaluation (European standards, 2011). The ISO/IEC 25010 software product quality model is a leading model for assessing software products; it defines a product quality model composed of eight characteristics (which are further subdivided into sub-characteristics) that relate to static properties of software. The model is applicable to both computer systems and software products. The characteristics and sub-characteristics provide consistent terminology for specifying, measuring, and evaluating system and software product quality. They also provide a set of quality characteristics against which stated quality requirements can be compared for completeness. Frequencies and percentages were employed for each of the items, which were categorized and presented in tables, with the data and results treated using the following statistical measures: 1. The weighted arithmetic mean was used to weigh the respondents' answers and determine the acceptability of the developed system. The researcher used a five-level Likert scale to determine the corresponding descriptive interpretation, quantified using a scale ranging from "Highly Acceptable" to "Highly Unacceptable." In interpreting the weighted mean, the following scale was used: 4.51-5.00 (Highly Acceptable), 3.51-4.50 (Acceptable), 2.51-3.50 (Moderately Acceptable), 1.51-2.50 (Unacceptable), and 1.50 and below (Highly Unacceptable). 2. The F-test, or one-way analysis of variance (ANOVA), was used to determine whether there are significant differences in the evaluations made by the three groups of respondents of the developed multi-channel support and ticketing interface for online support management system platforms. The data gathered from the answered questionnaires were checked, classified, tabulated, and analyzed according to the research design using the IBM Statistical Package for the Social Sciences (SPSS).
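As a concrete illustration of the first statistical measure above, the following Python sketch computes a weighted arithmetic mean from hypothetical five-point Likert responses and maps it onto the interpretation scale quoted in the text; the response counts are invented for demonstration.

```python
# Minimal sketch: weighted arithmetic mean of five-point Likert
# responses, interpreted with the scale used in the study.
# The response counts below are hypothetical.

# Number of respondents choosing each rating (5 = highest)
counts = {5: 30, 4: 25, 3: 7, 2: 2, 1: 1}

weighted_mean = (sum(rating * n for rating, n in counts.items())
                 / sum(counts.values()))

def interpret(mean: float) -> str:
    # Interpretation brackets from the study, highest first
    if mean >= 4.51:
        return "Highly Acceptable"
    if mean >= 3.51:
        return "Acceptable"
    if mean >= 2.51:
        return "Moderately Acceptable"
    if mean >= 1.51:
        return "Unacceptable"
    return "Highly Unacceptable"

print(f"{weighted_mean:.2f} -> {interpret(weighted_mean)}")
# 4.25 -> Acceptable
```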
[Audio] Table 1 shows that both the technical support agents and the IT instructors evaluated the system as "Acceptable," with total mean scores of 4.44 and 4.30, respectively, while the office personnel evaluated the system as "Highly Acceptable" with a mean score of 4.56. This result implies that, as perceived by IT users, the system can be integrated and is compatible with the existing programs that they are using. This result is more evident to the office personnel than to the technical support agents and IT instructors. It also suggests that the system meets the relevant standard and performs the work for which it was intended. Table 2 shows the performance efficiency, where the IT instructors, technical support agents, and office personnel evaluated the system as "Acceptable" with mean scores of 4.05, 3.93, and 4.26, respectively. This means that the respondents are satisfied with what the system can do under specified conditions. Performance efficiency was adopted into the system quality model to assess the system's capability to exhibit the required performance with regard to the number of resources needed to satisfy the needs of the users in a specified context of use.
[Audio] Table 3 shows the system's compatibility, where the technical support agents evaluated the system as "Highly Acceptable" with a mean score of 4.67. On the other hand, both the IT instructors and office personnel assessed the system as "Acceptable" with corresponding mean scores of 3.95 and 4.41, respectively. As evaluated by the respondents, the result implies that the system can perform its functions while sharing a common environment, without interfering with other systems. This characteristic is more clearly perceived by the technical support group than by the IT instructors and office personnel. When the respondents evaluated the system's usability, as seen in Table 4, it was revealed that both the technical support agents and office personnel gave a "Highly Acceptable" rating with mean scores of 4.64 and 4.57, respectively. Only the IT instructors gave an "Acceptable" rating, with a mean score of 4.23. These findings suggest that the system requires little effort to use and is easy to understand; in other words, the system is user-friendly, which is one of the important characteristics of a system. Nielsen (2012) adds that "usability is a quality attribute that assesses how easy user interfaces are to use," and it includes learnability, efficiency, and memorability. Another characteristic that was evaluated is reliability, as seen in Table 5. It was found that the technical support agents, IT instructors, and office personnel gave an "Acceptable" rating with respective mean scores of 4.25, 4.15, and 4.42. This result supports that the system can maintain its level of performance under the specified conditions for a period of time. Pan (2019) defines software reliability as the probability of failure-free software operation for a specified period in a specified environment.
[Audio] In terms of system security, Table 6 reveals that the technical support agents, IT instructors, and office personnel perceived that the system has an "Acceptable" rating, supported by their mean scores of 4.37, 4.39, and 4.45, respectively. It can be deduced that, as perceived by the respondents, the system can prevent unauthorized access, whether accidental or deliberate. As evaluated by the respondents and shown in Table 7, the technical support agents and IT instructors gave an "Acceptable" rating with corresponding mean scores of 4.10 and 4.29, while the office personnel gave a "Highly Acceptable" rating of 4.60. This result confirms that the system possesses analyzability for diagnosing inefficiencies, changeability for modifications, and testability for validating the modified software. Franca and Soares (2015) and Crouch (2019) add that maintainability reflects how easily software products or systems can be modified, corrected, or adapted to current changes in the environment.
[Audio] Table 8 presents the overall result of the evaluation by the IT instructors, technical support agents, and office personnel. It can be seen in the table that the office personnel gave a "Highly Acceptable" rating with a mean score of 4.50, while the IT instructors and technical support agents gave an "Acceptable" rating with overall means of 4.18 and 4.36, respectively. The result implies that the newly developed system is more acceptable to the office personnel than to the IT instructors and technical support agents. This could be attributed to the fact that IT instructors and technical support agents are, in general, more knowledgeable in terms of system development than the office personnel. The result shows that the system complied with the requirements and specifications based on ISO/IEC 25010 software product quality.
[Audio] The result revealed in Table 9 shows that there was a significant difference in the evaluations of the respondents, F(2, 62) = 3.164, p = 0.049. This result implies that the evaluations made by the three groups of respondents significantly differ from one another. To determine which group's evaluation significantly differs from the others, the Tukey HSD was used as a post hoc test. As shown in Table 10, the evaluation made by the IT instructors significantly differs from the evaluation made by the office personnel. Looking back at the previous results, it can be seen that the newly developed system is more acceptable to the office personnel than to the IT instructors. This could be attributed to the fact that IT instructors, in general, are more knowledgeable in terms of system development than the office personnel.
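For readers who want to reproduce this kind of analysis outside SPSS, here is a minimal Python sketch of a one-way ANOVA followed by a Tukey HSD post hoc test. The scores are hypothetical stand-ins for the three groups' ratings, so the numbers will not match Tables 9 and 10; only the procedure is illustrated.

```python
# Minimal sketch: one-way ANOVA plus Tukey HSD post hoc test for
# three respondent groups. All scores are hypothetical.
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

it_instructors   = [4.1, 4.0, 4.3, 4.2, 4.1, 4.2, 4.3, 4.0]
support_agents   = [4.4, 4.3, 4.5, 4.2, 4.4, 4.3]
office_personnel = [4.5, 4.6, 4.4, 4.5, 4.6, 4.5, 4.4, 4.6]

# F-test: do the group means differ significantly?
f_stat, p_value = f_oneway(it_instructors, support_agents, office_personnel)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")

# Post hoc: which pairs of groups differ?
scores = it_instructors + support_agents + office_personnel
groups = (["IT instructor"] * len(it_instructors)
          + ["Support agent"] * len(support_agents)
          + ["Office personnel"] * len(office_personnel))
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))
```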
[Audio] Thank you for listening. Thank you.