JAYROS VILLA AMAR


Scene 1 (0s)

JAYROS VILLA AMAR. MSIT STUDENT.

Scene 2 (6s)

[Audio] Quantitative research is the process of collecting and analyzing numerical data. It can be used to find patterns and averages, make predictions, test causal relationships, and generalize results to wider populations.

Scene 3 (22s)

Quantitative Research. There are four types of quantitative research:
- Descriptive
- Correlational
- Quasi-experimental
- Experimental

Scene 4 (31s)

[Audio] Descriptive research is concerned with describing the nature, characteristics, and components of a population or a phenomenon. It attempts to find the general attributes of the presently existing situation and aims to define the existing conditions of a classified variable.

Scene 5 (52s)

[Audio] In correlational research, we use correlational statistics such as Pearson's correlation or chi-square, depending on the statistical tool set out in your methodology. Correlational research seeks to interpret the relationships between and among a number of facts. It identifies tendencies and patterns in the data, but it does not go so far as to prove the causes of these observed patterns. Example: "Correlation between gender and college course choice".
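As an illustration of the kind of correlational statistic mentioned above, the sketch below runs a chi-square test of independence on a hypothetical contingency table of gender versus college course choice. The counts and category labels are invented for demonstration only and are not data from this presentation.

```python
# Hypothetical example: chi-square test of independence between
# gender and college course choice (counts are made up).
from scipy.stats import chi2_contingency

# Rows: Male, Female; columns: IT, Education, Business
observed = [
    [30, 15, 25],   # Male
    [20, 35, 25],   # Female
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p_value:.4f}")
# A small p-value (e.g. < 0.05) suggests course choice is associated with
# gender; it does not by itself establish a causal relationship.
```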

Scene 6 (1m 25s)

[Audio] Example: "The effects of computer addiction on the academic performance of junior high school students". In quasi-experimental research, we do not simply run a survey or collect data from participants; instead, we conduct something like a pre-test and a post-test.
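To make the pre-test/post-test idea concrete, here is a minimal sketch of how a quasi-experimental comparison might be analyzed with a paired t-test. The scores below are hypothetical, and the paired t-test is only one common choice of analysis, not one prescribed by this presentation.

```python
# Hypothetical pre-test and post-test scores for the same group of students.
from scipy.stats import ttest_rel

pre_test  = [62, 55, 70, 48, 66, 59, 73, 51]
post_test = [68, 60, 75, 50, 72, 61, 78, 58]

t_stat, p_value = ttest_rel(post_test, pre_test)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# A significant result indicates that scores changed between pre-test and
# post-test, though without random assignment other explanations cannot be
# ruled out (hence "quasi"-experimental).
```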

Scene 7 (1m 48s)

[Audio] Example: "Feasibility study on aratiles fruit as a potential medication for diabetes".

Scene 8 (2m 4s)

Quantitative Research Sample. Multi-channel support and ticketing interface for online support management system platforms.

Scene 9 (2m 13s)

[Audio] The major concept of this study focuses on the development and evaluation of a multi-channel support and ticketing interface for OSMS (online support management system) platforms in universities.

Scene 10 (3m 19s)

[Figure] Fig. 1. Conceptual diagram: end-users/requesters raise complaints via tickets and chat; complaints received are handled by technical support agents and the technical support head until the complaints are resolved.

Scene 11 (3m 27s)

Fig. 1 shows the conceptual diagram of how the multi-channel support and ticketing interface for OSMS platforms works. Stakeholders create a complaint through the available primary channel. A technical support agent receives the complaint and verifies that the problem is real, not just perceived. The agent also ensures that enough information about the problem is obtained from the stakeholder; this generally includes the stakeholder's environment, how and when the issue occurs, and all other relevant circumstances. The technical support agent creates the issue in the system, entering all relevant data as provided by the requester. As work is done on the issue, the agent updates the system with new data, and any attempt at fixing the problem is noted in the system; the ticket status will most likely change from open to pending. After the issue has been fully addressed, it is marked as resolved in the system. If a technical support agent cannot, for whatever reason, handle the service request, the request is escalated and the head of the technical support team resolves the issue.

This study aims to develop a multi-channel support and ticketing interface for OSMS platforms that streamlines support requests and consolidates stakeholders' queries from multiple channels, providing a holistic view of all ticket-related information in one place for universities. Specifically, it seeks to answer the following questions: What are the steps undertaken in the development of the system? What is the difference in the evaluation of the IT instructors, technical support agents, and office personnel based on ISO/IEC 25010 software product quality in terms of functional suitability, performance efficiency, compatibility, usability, reliability, security, and maintainability? What is the difference in the evaluation of the developed system among the three groups of respondents? And what problems were encountered by the respondents during system testing from the perspective of the IT instructors, technical support agents, and office personnel?
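The ticket lifecycle described above (create, work, pending, resolve, escalate) can be summarized in a small sketch. The class, status, and field names below are hypothetical and only illustrate the flow in Fig. 1; they are not taken from the actual OSMS implementation.

```python
# Illustrative model of the ticket flow in Fig. 1 (names are hypothetical).
from dataclasses import dataclass, field
from enum import Enum


class Status(Enum):
    OPEN = "open"
    PENDING = "pending"
    RESOLVED = "resolved"
    ESCALATED = "escalated"


@dataclass
class Ticket:
    requester: str          # stakeholder who raised the complaint
    channel: str            # e.g. email, chat, web form
    description: str        # environment, how and when the issue occurs
    assignee: str = "technical support agent"
    status: Status = Status.OPEN
    notes: list[str] = field(default_factory=list)

    def work_on(self, note: str) -> None:
        """Record an attempted fix; the ticket moves from open to pending."""
        self.notes.append(note)
        self.status = Status.PENDING

    def resolve(self) -> None:
        """Mark the ticket as resolved once the issue is fully addressed."""
        self.status = Status.RESOLVED

    def escalate(self) -> None:
        """Hand the request over to the technical support head."""
        self.assignee = "technical support head"
        self.status = Status.ESCALATED


# Example walk-through of a single complaint (hypothetical data).
ticket = Ticket("office staff", "chat", "cannot print payroll report")
ticket.work_on("reinstalled printer driver")
ticket.resolve()
```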

Scene 12 (4m 32s)

MATERIALS AND METHODS. Participants. For the initial deployment and testing, the study was conducted at the University of Antique, a state university in Antique founded in 1954 that offers courses across various disciplines such as technology, maritime studies, teacher education, computer studies, engineering, architecture, business, and arts and sciences. A total of 65 respondents were included in the study, divided into three purposively chosen groups: 20 IT instructors, 38 office personnel, and six (6) technical support agents.

Scene 13 (4m 57s)

Instruments and Measures. The researcher anchored the study on research and development (R&D), which aims to understand a subject matter more completely and build on the body of knowledge relating to it. The researcher also used the descriptive type of research in the conduct of the study. This method of research involves either identifying the characteristics of an observed phenomenon or exploring possible correlations among two or more phenomena.

Scene 14 (5m 17s)

The research instruments used were unstructured interviews, document analysis, and a modified survey questionnaire adapted from the ISO/IEC 25010 software product quality model for system evaluation (European standards, 2011). The ISO/IEC 25010 model is a leading model for assessing software products; it defines a product quality model composed of eight characteristics (further subdivided into sub-characteristics) that relate to static properties of software. The model is applicable to both computer systems and software products. The characteristics and sub-characteristics provide consistent terminology for specifying, measuring, and evaluating system and software product quality, as well as a set of quality characteristics against which stated quality requirements can be compared for completeness.

Frequencies and percentages were employed for each of the categorized items and presented in tables, and the data and results were treated using the following statistical measures. The weighted arithmetic mean was used to weigh the respondents' answers and determine the acceptability of the developed system. The researcher used a five-level Likert scale quantified with descriptive interpretations ranging from "Highly Acceptable" to "Highly Unacceptable". In interpreting the weighted mean, the following scale was used: 4.51-5.00 (Highly Acceptable), 3.51-4.50 (Acceptable), 2.51-3.50 (Moderately Acceptable), 1.51-2.50 (Unacceptable), and 1.50 and below (Highly Unacceptable). An F-test, or one-way analysis of variance (ANOVA), was used to determine whether there are significant differences in the evaluations of the three groups of respondents of the developed multi-channel support and ticketing interface for online support management system platforms. The data gathered from the answered questionnaires were checked, classified, tabulated, and analyzed according to the research design using the IBM Statistical Package for the Social Sciences (SPSS).
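The statistical treatment described above can be sketched in code as follows. The per-respondent ratings below are hypothetical placeholders; the sketch only illustrates the weighted-mean interpretation scale and a one-way ANOVA across the three groups, not the study's actual SPSS output.

```python
# Sketch of the statistical treatment: weighted mean with the five-level
# interpretation scale, and a one-way ANOVA (F-test) across the three groups.
# All ratings below are hypothetical placeholders.
from statistics import mean
from scipy.stats import f_oneway


def interpret(weighted_mean: float) -> str:
    """Map a weighted mean to the descriptive scale used in the study."""
    if weighted_mean >= 4.51:
        return "Highly Acceptable"
    if weighted_mean >= 3.51:
        return "Acceptable"
    if weighted_mean >= 2.51:
        return "Moderately Acceptable"
    if weighted_mean >= 1.51:
        return "Unacceptable"
    return "Highly Unacceptable"


# Hypothetical per-respondent mean ratings for each group of respondents.
it_instructors = [4.2, 4.0, 4.4, 4.1, 4.3]
support_agents = [4.5, 4.3, 4.4, 4.2]
office_staff   = [4.6, 4.5, 4.7, 4.4, 4.6, 4.5]

for name, scores in [("IT instructors", it_instructors),
                     ("Technical support agents", support_agents),
                     ("Office personnel", office_staff)]:
    m = mean(scores)
    print(f"{name}: mean = {m:.2f} ({interpret(m)})")

# One-way ANOVA on the three groups' ratings.
f_stat, p_value = f_oneway(it_instructors, support_agents, office_staff)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
```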

Scene 15 (6m 22s)

[Audio] Table 1 shows that both the technical support agents and the IT instructors evaluated the system as "Acceptable", with total mean scores of 4.44 and 4.30, respectively, while the office personnel evaluated the system as "Highly Acceptable" with a mean score of 4.56. Table 2 shows the performance efficiency, where the IT instructors, technical support agents, and office personnel all evaluated the system as "Acceptable", with mean scores of 4.05, 3.93, and 4.26, respectively. This means that the respondents are satisfied with what the system can do under a specified condition.

Scene 16 (7m 7s)

[Audio] Table 3 shows the system's compatibility, where the technical support agents evaluated the system as "Highly Acceptable" with a mean score of 4.67. On the other hand, both the IT instructors and the office personnel assessed the system as "Acceptable", with corresponding mean scores of 3.95 and 4.41, respectively. When the respondents evaluated the system's usability, as seen in Table 4, both the technical support agents and the office personnel gave a "Highly Acceptable" rating, with mean scores of 4.64 and 4.57, respectively; only the IT instructors gave an "Acceptable" rating, with a mean score of 4.23. Another characteristic evaluated was reliability, as seen in Table 5. It was found that the technical support agents, IT instructors, and office personnel gave an "Acceptable" rating, with respective mean scores of 4.25, 4.15, and 4.42.

Scene 17 (8m 13s)

[Audio] In terms of system security, Table 6 revealed that the technical support agents, IT instructors, and office personnel perceived the system as "Acceptable", supported by their mean scores of 4.37, 4.39, and 4.45. As evaluated by the respondents in Table 7, the technical support agents and IT instructors gave an "Acceptable" rating with corresponding mean scores of 4.10 and 4.29, while the office personnel gave a "Highly Acceptable" rating of 4.60.

Scene 18 (8m 51s)

[Audio] Table 8 presents the overall result of the evaluation by the IT instructors, technical support agents, and office personnel. It can be seen in the table that the office personnel gave a "Highly Acceptable" rating with a mean score of 4.50, while the IT instructors and technical support agents gave an "Acceptable" rating with overall means of 4.18 and 4.36, respectively.

Scene 19 (9m 19s)

[Audio] The result revealed in Table 9 shows that there was a significant difference in the evaluations of the respondents, F(2, 62) = 3.164, p = 0.049. This implies that the evaluations made by the three groups of respondents differ significantly from one another.

Scene 20 (10m 10s)

[Audio] As shown in Table 10, the evaluation made by the IT instructors differs significantly from the evaluation made by the office personnel. Looking back at the previous results, the newly developed system is more acceptable to the office personnel than to the IT instructors. This result could be attributed to the fact that IT instructors are, in general, more knowledgeable about system development than office personnel.
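A pairwise post-hoc comparison like the one reported in Table 10 could be carried out along the following lines. The group ratings below are hypothetical, and Tukey's HSD is shown only as one common follow-up to a significant ANOVA; the study may have used a different post-hoc procedure in SPSS.

```python
# Hypothetical post-hoc pairwise comparison (Tukey HSD) after a significant
# one-way ANOVA; the ratings are placeholders, not the study's data.
from scipy.stats import tukey_hsd

it_instructors = [4.2, 4.0, 4.4, 4.1, 4.3]
support_agents = [4.5, 4.3, 4.4, 4.2]
office_staff   = [4.6, 4.5, 4.7, 4.4, 4.6, 4.5]

result = tukey_hsd(it_instructors, support_agents, office_staff)
print(result)
# The pairwise p-values show which group means differ; in the study, the
# IT instructor vs. office personnel comparison was the significant one.
```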