Article
Peer-Review Record

Text Mining and Multi-Attribute Decision-Making-Based Course Improvement in Massive Open Online Courses

Appl. Sci. 2024, 14(9), 3654; https://doi.org/10.3390/app14093654
by Pei Yang, Ying Liu *, Yuyan Luo *, Zhong Wang and Xiaoli Cai
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 15 March 2024 / Revised: 20 April 2024 / Accepted: 23 April 2024 / Published: 25 April 2024

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

Thank you for the informative, detailed, and carefully prepared paper! Please keep in mind the following risks of sentiment analysis using MADM:

1. Selecting attributes can be subjective and may vary depending on the context or domain. Different analysts might choose different attributes, leading to varied results!

2. The weights assigned to attributes might not accurately reflect their importance in determining sentiment, leading to biased results!

3. MADM typically focuses on numerical data and may not capture the complexity of sentiment analysis, especially nuances in language, sarcasm, or cultural context!

4. MADM might face scalability issues when dealing with large datasets or when the number of attributes increases significantly, leading to longer processing times and increased computational resources!

5. MADM techniques might not be easily adaptable to evolving linguistic trends, new sentiment expressions, or changes in language usage patterns, making them less suitable for dynamic sentiment analysis tasks.

Congratulations on your excellent work!

Author Response

Dear Reviewer,

We are writing to express our deep gratitude for your thoughtful consideration and review of this manuscript, titled “Text Mining and Multi-Attribute Decision Making-based Course Improvement in Massive Open Online Courses”. Since the anonymous experts’ comments are of great value as guidance, we have carefully read them all and incorporated them into the revision of our manuscript. We hope that the revised manuscript meets the requirements of the journal.

In response to the comments, we have made major revisions to our work and provide point-by-point responses to the reviewers, with the revised content shown in red in this file and in blue in the revised manuscript.

Once again, we greatly appreciate your consideration and kind guidance, and we hope that our responses resolve the questions raised by the experts and the journal. Responses to the reviewers’ comments are listed below:

1. Selecting attributes can be subjective and may vary depending on the context or domain. Different analysts might choose different attributes, leading to varied results!

The authors’ answer: Thank you for your valuable and helpful suggestions. In the article, we use a word embedding method to extract and classify course attributes. The specific steps are: (1) extract nouns from the online reviews, because course characteristics are generally expressed as nouns; (2) use FastText to vectorize the extracted nouns, because FastText can reasonably assign a corresponding vector to each word; (3) use Affinity Propagation (AP) to cluster the nouns represented by these vectors; its function is equivalent to K-means, but AP does not require the number of clusters to be set manually, which reduces a certain amount of subjectivity; (4) finally, summarize the course attributes according to the words in each cluster. This method has been used in much of the literature. For example, Ali et al. (2022) cluster the word vectors generated by a word embedding model; Vargas-Calderón et al. (2021) use LDA and FastText to transform text reviews into vectors in order to extract service attributes relevant to hotel customers from online hotel reviews; and Miranda-Belmonte et al. (2023) propose a fast and efficient approach to digital news topic modeling based on semantic clustering and word embedding.

Ali, N. M., Alshahrani, A., Alghamdi, A. M., & Novikov, B. (2022). Extracting Prominent Aspects of Online Customer Reviews: A Data-Driven Approach to Big Data Analytics [Article]. Electronics, 11(13), 19, Article 2042. https://doi.org/10.3390/electronics11132042 

Miranda-Belmonte, H. U., Muñiz-Sánchez, V., & Corona, F. (2023). Word embeddings for topic modeling: An application to the estimation of the economic policy uncertainty index [Article]. Expert Systems with Applications, 211, 20, Article 118499. https://doi.org/10.1016/j.eswa.2022.118499 

Vargas-Calderón, V., Ochoa, A. M., Nieto, G. Y. C., & Camargo, J. E. (2021). Machine learning for assessing quality of service in the hospitality sector based on customer reviews [Article]. Information Technology & Tourism, 23(3), 351-379. https://doi.org/10.1007/s40558-021-00207-4 
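As a minimal, self-contained sketch of steps (2)–(3), the snippet below clusters toy word vectors with scikit-learn's Affinity Propagation. The random vectors merely stand in for trained FastText embeddings, and the noun list is purely illustrative — not the paper's actual data:

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)

# Toy stand-in for FastText word vectors: in the real pipeline, each noun
# extracted from the reviews would be embedded with a trained FastText model.
nouns = ["teacher", "lecturer", "instructor", "homework", "quiz", "exam"]
centers = [np.zeros(8), np.full(8, 10.0)]  # two well-separated semantic groups
vectors = np.vstack(
    [centers[0] + 0.1 * rng.standard_normal(8) for _ in range(3)]
    + [centers[1] + 0.1 * rng.standard_normal(8) for _ in range(3)]
)

# Affinity Propagation chooses the number of clusters itself, unlike K-means.
ap = AffinityPropagation(random_state=0).fit(vectors)

clusters = {}
for noun, label in zip(nouns, ap.labels_):
    clusters.setdefault(label, []).append(noun)
print(clusters)
```

With clearly separated vectors, AP recovers the two groups without being told how many clusters to find, which is exactly the property the answer above appeals to.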

 

2. The weights assigned to attributes might not accurately reflect their importance in determining sentiment, leading to biased results!

The authors’ answer: Thank you for your valuable and helpful suggestions. In this study, we assign weights to attributes along six dimensions — positive emotion, neutral emotion, negative emotion, no emotion, attention, and cost — in order to determine the priority of course attribute improvement. This takes into account not only learners’ emotions and concerns but also the cost of improvement. Thus, while the weights assigned to attributes may not perfectly reflect their importance in determining sentiment, this approach does provide a comprehensive and systematic analytical framework for improving course attributes. It allows educators to jointly consider learners’ emotional experience, their attention allocation, and the cost of implementing improvements during instructional design, and thereby to formulate more reasonable and effective course optimization strategies. Specifically, by analyzing dimensions such as positive and negative emotion, educators can gain a deeper understanding of learners’ emotional experience during the course, identify potential sources of negative emotion, and then adjust teaching strategies and optimize course content to improve learners’ experience and satisfaction. At the same time, considering attention helps educators determine which course attributes are more likely to attract learners’ attention, so that investment in these aspects can be increased to improve the teaching effect. Combining emotion and attention makes it easier to identify the factors that affect learners’ emotions.
For example, when an attribute’s negative emotion is high while its neutral, positive, and non-emotional scores are low and its attention is high, this indicates that the attribute is not only regarded as important but that most learners are dissatisfied with it; in other words, the attribute has contributed substantially to learners’ dissatisfaction with the course, and educators need to consider whether to improve it. Considering cost enables educators to be more rational and pragmatic when formulating improvement plans, avoiding cases where the pursuit of certain improvements incurs costs too high to be feasible. Although this method of weight allocation involves some subjectivity and uncertainty, it undoubtedly provides a useful reference for improving course attributes. In future practice, educators can gradually refine this analysis method through continuous trial and optimization, bringing it closer to actual teaching needs and allowing it to play a greater role in improving teaching quality and effectiveness.

3. MADM typically focuses on numerical data and may not capture the complexity of sentiment analysis, especially nuances in language, sarcasm, or cultural context!

The authors’ answer: Thank you for your valuable and helpful suggestions. Although MADM itself may not capture the full complexity of sentiment analysis, this paper converts sentiment into data at an earlier stage to represent differences in emotion across course attributes. In addition, many previous studies combine MADM with sentiment analysis; we have listed some of them in the revised manuscript, as follows: Qin et al. [47] initially extracted product attributes, weight values, and emotional tendencies from online reviews. Subsequently, they employed the Stochastic Multi-criteria Acceptability Analysis (SMAA)-PROMETHEE method to derive product ranking outcomes. Zhang et al. [48] proposed an innovative product selection model that integrates sentiment analysis with the intuitionistic fuzzy TODIM method. This model aims to assist potential customers in ranking alternative products based on consumers’ opinions regarding product performance. Liang et al. [42] introduced a quantitative approach for hotel selection leveraging online reviews. Furthermore, they innovatively developed the DL-VIKOR method, which ranks hotels based on customer satisfaction scores and the weights of extracted attributes. Nilashi et al. [49] conducted cluster analysis using self-organizing maps (SOM) to categorize hotel features. Subsequently, they employed the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) to rank these features. Additionally, neuro-fuzzy technology was utilized to reveal customer satisfaction levels, providing a comprehensive understanding of hotel performance. Li et al. [46] introduced a novel approach for product ranking that integrates the mining of online reviews with an interval-valued intuitionistic fuzzy TOPSIS technique. This method aims to assist consumers in selecting products that align with their individual preferences.

4. MADM might face scalability issues when dealing with large datasets or when the number of attributes increases significantly, leading to longer processing times and increased computational resources!

The authors’ answer: Thank you for your valuable and helpful suggestions. Multi-attribute decision-making (MADM) methods can indeed face scalability issues when dealing with large datasets or a significantly increased number of attributes. In general, however, the number of attributes extracted from online reviews is not very large. First, online reviews tend to be targeted and focused: when learners comment, they frame their remarks around the aspects they care about or feel most strongly about. While these aspects may cover many details, they are generally not overly complex, so the number of extracted attributes is relatively limited. Second, attribute extraction is usually accompanied by data cleaning and filtering. Prior to analysis, we pre-process the collected online comments to remove duplicate, invalid, or irrelevant information and retain the parts relevant to the decision analysis. This process helps reduce the number of attributes, making the data more refined and targeted. In addition, with the continuous development of natural language processing (NLP) technology, attribute extraction methods are constantly being optimized: advanced text mining and semantic analysis techniques can identify the key information in online reviews more accurately, yielding more precise and useful attributes. If we encounter very large datasets in the future, we will consider other methods to handle them.

5. MADM techniques might not be easily adaptable to evolving linguistic trends, new sentiment expressions, or changes in language usage patterns, making them less suitable for dynamic sentiment analysis tasks.

The authors’ answer: Thank you for your valuable and helpful suggestions. MADM may indeed be less suitable for dynamic sentiment analysis tasks, and we may consider replacing it with more adaptable methods in future work.

We tried our best to improve the manuscript and made some changes, marked in blue in the revised paper, which do not affect its content and framework. We earnestly appreciate the reviewers’ work and hope the corrections meet with approval. Once again, thank you very much for your comments and suggestions.

Reviewer 2 Report

Comments and Suggestions for Authors

Abstract needs to be rewritten following the problem under study, research methods, results, conclusion, and significance.

 

Introduction needs to be strengthened by providing more context and motivation for why improving MOOC course attributes based on learner feedback is an important problem that needs tackling. More work needs to be done in the introduction section on framing the gap and significance.

Related works should include background on multi-attribute decision-making methods, and how these can be applied to other domains. This will provide grounds for the novelty of the work in applying these techniques to the said problem.

Methods: why the FastText word embedding approach and Affinity Propagation clustering method were chosen; how the DL architecture was designed for sentiment analysis; how the criteria weights were determined for TOPSIS.

The result section is dense, with too many tables of referenced numbers. Some higher-level summary or more interpretation of key findings would aid understanding, and visualization would help capture readers’ attention.

 

Discussion should provide the implications of the results, limitations of the approach, and directions for future work.

 

Conclusion should highlight the key takeaways, contributions, and broader impacts.

Only two of the references are from 2023 and one is from 2024. Kindly add newer references to strengthen the literature review. I checked the similarity and found no issues.

 

Comments on the Quality of English Language

Minor editing of English language required

Author Response

Dear Editor and Reviewers,

We are writing to express our deep gratitude for your thoughtful consideration and review of this manuscript, titled “Text Mining and Multi-Attribute Decision Making-based Course Improvement in Massive Open Online Courses”. Since the anonymous experts’ comments are of great value as guidance, we have carefully read them all and incorporated them into the revision of our manuscript. We hope that the revised manuscript meets the requirements of the journal.

In response to the comments, we have made major revisions to our work and provide point-by-point responses to the reviewers, with the revised content shown in red in this file and in blue in the revised manuscript.

Once again, we greatly appreciate your consideration and kind guidance, and we hope that our responses resolve the questions raised by the experts and the journal. Responses to the reviewers’ comments are listed below:

1. Abstract needs to be rewritten following the problem under study, research methods, results, conclusion, and significance.

The authors’ answer: Thank you for your valuable and helpful suggestions. We have revised the abstract according to your suggestion.

Abstract

As the leading platform of online education, MOOCs provide learners with rich course resources, but course designers still face the challenge of how to accurately improve course quality. Current research mainly focuses on learners’ emotional feedback on different course attributes, neglecting non-emotional content as well as the costs required to improve these attributes. This limitation makes it difficult for course designers to fully grasp learners’ real needs and to accurately locate the key issues in a course. To overcome these challenges, this study proposes a MOOC course improvement method based on text mining and multi-attribute decision-making. First, we utilize word vectors and clustering techniques to extract the course attributes that learners focus on from their comments. Second, with the help of BERT-based deep learning methods, we conduct sentiment analysis on these comments to reveal learners’ emotional tendencies and non-emotional content towards course attributes. Finally, we adopt the multi-attribute decision-making method TOPSIS to comprehensively consider the attributes’ emotional scores, attention, non-emotional content, and improvement costs, providing course designers with a priority ranking for attribute improvement. We applied this method to two typical MOOC programming courses: C language and Java language. The experimental findings demonstrate that our approach effectively identifies course attributes from reviews; assesses learners’ satisfaction, attention, and improvement cost; and ultimately generates a prioritized list of course attributes for improvement. This study provides a new approach for improving the quality of online courses and contributes to the sustainable development of online course quality.

2. Introduction needs to be strengthened by providing more context and motivation for why improving MOOC course attributes based on learner feedback is an important problem that needs tackling. More work needs to be done in the introduction section on framing the gap and significance.

The authors’ answer: Thank you for your valuable and helpful suggestions. We have revised the introduction according to your suggestion. We now delve more deeply into the strong momentum of MOOCs in contemporary society, while also exposing a range of issues and challenges that MOOCs have encountered during their swift development.

Massive Open Online Courses (MOOCs), a significant innovation in teaching technology, offer a diverse array of high-quality open online courses globally [1], effectively overcoming numerous limitations of traditional offline learning in terms of cost, space, and background [2]. Simultaneously, the openness of MOOCs presents a novel opportunity for higher education institutions to create a richer and enhanced learning experience for learners through strengthened collaboration in knowledge sharing [3]. The emergence of MOOCs has not only facilitated the global sharing of educational resources but has also breathed new life into efforts toward educational equity and popularization [4]. MOOCs are now at the forefront of education [5]. Since the inception of the “Year of MOOCs” in 2012, the utilization of MOOCs has been steadily increasing worldwide. This trend was further accelerated by the COVID-19 pandemic, which prompted the transition of numerous offline courses to online formats. Consequently, people have become increasingly reliant on MOOCs, with millions of new users signing up on their platforms [6]. By January 26, 2024, China had launched over 76,800 MOOCs, catering to a staggering 1.277 billion learners within the country [7].

However, as MOOCs have boomed, the rapid surge in the number of courses has simultaneously given rise to inconsistent quality [8]. While some MOOCs are meticulously designed and widely popular among learners, others have been criticized for simplistic content and outdated teaching methods. This significant disparity in quality not only impairs learners’ experience but also hinders the sustainable development of MOOCs. Consequently, identifying the issues present in MOOCs and offering targeted improvement suggestions to course designers has emerged as a crucial problem that demands urgent attention and resolution.

MOOCs enable learners to share their views and perceptions of courses through posted reviews. These reviews encompass a diverse array of learner needs, expectations, and suggestions [9], serving as a rich resource for enhancing course quality. By delving deeply into these comments, we can gain insights into learners’ satisfaction and dissatisfaction with various course attributes [10], enabling us to provide targeted improvement directions to course designers [11], as in the research of Geng et al. [12] and Liu et al. [13].

3. Related works should include background on multi-attribute decision-making methods, and how these can be applied to other domains. This will provide grounds for the novelty of the work in applying these techniques to the said problem.

The authors’ answer: Thank you for your valuable and helpful suggestions. We have revised the related works according to your suggestion. In the related works, we added a review of MADM-based ranking methods.

2.3 Review of multi-attribute decision-making ranking methods

Whether it is a purchase decision in daily life or a task-scheduling decision at work, decision-making forms a crucial part of people’s daily lives and professional endeavors [42]. Some decisions are relatively straightforward, with minor consequences if a wrong choice is made. Others are highly intricate, such that even a slight error can lead to significant repercussions, necessitating a deep and cautious approach. Generally speaking, a real-life decision problem typically involves numerous criteria or attributes that must be considered concurrently to arrive at a well-informed decision. The study of such problems is frequently labeled multi-criteria decision making or multi-attribute decision making, which entails selecting the most suitable option from a finite set of alternatives.

The literature introduces some of the most renowned MADM methods, including PROMETHEE, TODIM, VIKOR, and TOPSIS, which are designed to tackle ranking problems effectively [43,44,45,46]. Qin et al. [47] initially extracted product attributes, weight values, and emotional tendencies from online reviews. Subsequently, they employed the Stochastic Multi-criteria Acceptability Analysis (SMAA)-PROMETHEE method to derive product ranking outcomes. Zhang et al. [48] proposed an innovative product selection model that integrates sentiment analysis with the intuitionistic fuzzy TODIM method. This model aims to assist potential customers in ranking alternative products based on consumers’ opinions regarding product performance. Liang et al. [42] introduced a quantitative approach for hotel selection leveraging online reviews. Furthermore, they innovatively developed the DL-VIKOR method, which ranks hotels based on customer satisfaction scores and the weights of extracted attributes. Nilashi et al. [49] conducted cluster analysis using self-organizing maps (SOM) to categorize hotel features. Subsequently, they employed the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) to rank these features. Additionally, neuro-fuzzy technology was utilized to reveal customer satisfaction levels, providing a comprehensive understanding of hotel performance. Li et al. [46] introduced a novel approach for product ranking that integrates the mining of online reviews with an interval-valued intuitionistic fuzzy TOPSIS technique. This method aims to assist consumers in selecting products that align with their individual preferences.

No single multi-attribute decision-making (MADM) method can be unilaterally designated as the best or worst, as each has its unique strengths and limitations [50], and a method’s effectiveness depends heavily on how it is tailored to the specific outcomes and objectives of the planning process. Consequently, the selection of an appropriate MADM model should be guided by the specific scenario and requirements, rather than relying solely on general evaluations. TOPSIS, originally developed by Hwang and Yoon in 1981, is a straightforward ranking method both conceptually and in application [51]. TOPSIS offers three notable benefits: it is comprehensive, requires minimal data, and produces intuitive and easily comprehensible results [46]. Given its excellent performance in our investigation [52], the method combining interval intuitionistic fuzzy sets with TOPSIS is chosen as the ranking method for this study.
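For illustration, here is a minimal sketch of classical (crisp) TOPSIS; the actual study combines TOPSIS with interval intuitionistic fuzzy sets, and the attribute scores and weights below are made-up examples, not the paper's data:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with classical TOPSIS (crisp values).
    matrix: alternatives x criteria; benefit[j] is True if larger is better."""
    X = np.asarray(matrix, dtype=float)
    # 1. Vector-normalize each criterion column.
    R = X / np.linalg.norm(X, axis=0)
    # 2. Apply the criterion weights.
    V = R * np.asarray(weights, dtype=float)
    # 3. Ideal and anti-ideal points per criterion.
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    # 4. Relative closeness to the ideal solution (higher = better).
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Three hypothetical course attributes scored on satisfaction (benefit)
# and improvement cost (cost criterion: smaller is better).
scores = [[0.9, 2.0],   # attribute A: high satisfaction, low cost
          [0.5, 5.0],   # attribute B
          [0.2, 8.0]]   # attribute C: low satisfaction, high cost
closeness = topsis(scores, weights=[0.6, 0.4], benefit=np.array([True, False]))
print(closeness)
```

Since attribute A dominates on both criteria, its closeness score is 1 and it ranks first; in the study's setting, such scores are instead used to order attributes for improvement priority.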

4. Methods: why the FastText word embedding approach and Affinity Propagation clustering method were chosen; how the DL architecture was designed for sentiment analysis; how the criteria weights were determined for TOPSIS.

The authors’ answer: Thank you for your valuable and helpful suggestions. We have revised the methods according to your suggestion. We choose the FastText word embedding method and the Affinity Propagation (AP) clustering model for several reasons. Among the prevalent word embedding techniques, word2vec and FastText stand out as the most representative. FastText, introduced by Facebook researchers in 2016, extends the capabilities of word2vec by incorporating word morphology. This feature allows FastText to represent even words that do not appear during training, assigning a unique vector to each word. Consequently, we opt for this method to assign corresponding vectors to words, ensuring a more accurate capture of their semantic information. On the other hand, Affinity Propagation (AP) clustering serves a similar purpose to K-means clustering. Both aim to group words with similar semantic meanings into distinct classes based on the vectors obtained from the word embedding method. However, the advantage of AP clustering lies in its ability to automatically determine the optimal number of clusters, eliminating the need for manual specification. By combining FastText word embeddings with AP clustering, we can effectively represent words as numerical vectors and then group them based on their semantic similarity, without the need for predefined cluster counts. This approach enhances the accuracy and efficiency of our text processing and analysis tasks.

When utilizing a deep learning model for sentiment analysis, the principal steps involve: (1) Selecting a representative sample of online comments for manual annotation, ensuring a balanced distribution of various categories. Positive comments are labeled as 1, neutral as 0, negative as -1, and non-emotional comments as 2. (2) Dividing the labeled comments into two sets: a training set for the deep learning model and a separate test set for evaluation purposes. (3) Constructing the deep learning model with an embedded layer, intermediate layer, and output layer, and evaluating the accuracy of each model on the test set. (4) Selecting the deep learning model with the highest accuracy to predict the sentiment of the remaining unlabeled comments, thereby leveraging its powerful capabilities for accurate sentiment analysis. The specific steps are as follows:

To identify the emotional and non-emotional content associated with course attributes, we build a deep learning-based text classification model. This model is trained to categorize review text into four distinct classes: positive, negative, neutral, and non-emotional. For ease of presentation, classifying a review into these four classes is collectively referred to as sentiment classification. We use a simple rule to link a course attribute with its sentiment class in a review: if the review contains a word belonging to an attribute, the attribute takes on the sentiment class determined for that review. For example, the sentence “The teacher’s mandarin is good” is clearly a positive review, and if “mandarin” belongs to the attribute “expressive ability”, we can assume that the emotional tendency of expressive ability in this sentence is positive (see Figure 2). First, we carefully select a certain number of comments from the vast pool of online feedback for manual annotation: positive comments are labeled 1, negative -1, neutral 0, and comments lacking emotional expression 2. A subset of these annotated comments is then used to train a deep learning model, while the remaining annotated comments serve as a test set to assess each model’s accuracy. After rigorous testing, the model with the highest predictive accuracy is chosen to analyze the emotional tendencies of the remaining unlabeled comments.
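The sentence-level rule described above amounts to a lexicon lookup; the sketch below illustrates it with hypothetical attribute names and word lists, not the paper's actual clusters:

```python
# Hypothetical attribute lexicon: each course attribute maps to the set of
# nouns that the clustering step grouped under it (names are illustrative).
lexicon = {
    "expressive ability": {"mandarin", "pronunciation", "voice"},
    "course content": {"slides", "examples", "exercises"},
}

def attribute_sentiments(review_words, review_sentiment):
    """Assign the review's sentiment label to every attribute whose
    lexicon words appear in the review (the sentence-level rule above)."""
    hits = {}
    for attribute, words in lexicon.items():
        if words & set(review_words):   # any lexicon word present?
            hits[attribute] = review_sentiment
    return hits

# "The teacher's mandarin is good" was classified as positive (label 1):
print(attribute_sentiments(["the", "teacher's", "mandarin", "is", "good"], 1))
# -> {'expressive ability': 1}
```

As the response letter acknowledges, this rule breaks down when one sentence mentions several attributes with different sentiments, which motivates the aspect-level analysis proposed as future work.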

The pivotal aspect lies in developing a deep learning-based sentiment classification model that can effectively categorize the sentiment expressed in reviews. We choose deep learning for sentiment classification because, in recent years, it has become the mainstream approach, and with the help of pre-trained language models such as BERT [55], classification accuracy has improved greatly. Deep learning is a machine learning technique rooted in artificial neural networks, and a complete model is built by combining several modules. Our deep learning-based sentiment classification model is structured into three modules: an embedding layer, an intermediate layer, and an output layer.
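As a lightweight stand-in for the BERT-based models described above (not the paper's actual architecture), the sketch below trains a TF-IDF plus logistic regression classifier on the four-label scheme (1 positive, 0 neutral, -1 negative, 2 non-emotional); the tiny corpus is invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus using the paper's four labels:
# 1 = positive, 0 = neutral, -1 = negative, 2 = non-emotional.
reviews = [
    "great teacher wonderful explanations", "excellent course loved it",
    "the course covers pointers", "the course has ten chapters",
    "terrible audio awful pacing", "boring lectures hated the quizzes",
    "when does enrollment open", "where can i download the slides",
]
labels = [1, 1, 0, 0, -1, -1, 2, 2]

# TF-IDF features feed a multi-class logistic regression; in the paper
# this role is played by an embedding layer plus deep intermediate layers.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(reviews, labels)
print(clf.predict(["awful teacher terrible course"]))
```

In the paper's pipeline, several such candidate models are trained on the annotated subset, scored on a held-out test set, and the most accurate one labels the remaining comments.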

The TOPSIS method requires the weights of the various indicators to be determined beforehand. We therefore adopt the entropy weight method (EWM) to determine the weight of each indicator. Since the entropy weight method is an objective weighting technique that derives an indicator’s weight from the amount of information it carries, it reduces the deviation caused by subjective factors and makes the results more practical. It has been widely used [62].
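A minimal sketch of the entropy weight method, assuming a non-negative decision matrix with alternatives as rows and indicators as columns (the numbers below are made up for illustration):

```python
import numpy as np

def entropy_weights(matrix):
    """Entropy weight method: indicators whose values vary more across
    alternatives carry more information and receive larger weights."""
    X = np.asarray(matrix, dtype=float)
    n = X.shape[0]
    P = X / X.sum(axis=0)                       # share of each alternative
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)  # treat 0*log(0) as 0
    E = -(P * logs).sum(axis=0) / np.log(n)     # entropy per indicator, in [0, 1]
    d = 1.0 - E                                 # degree of diversification
    return d / d.sum()

# Indicator 1 is constant (carries no information); indicator 2 varies strongly.
decision = [[5.0, 1.0],
            [5.0, 4.0],
            [5.0, 9.0]]
w = entropy_weights(decision)
print(w)   # nearly all weight goes to the second indicator
```

The constant column reaches maximum entropy and receives essentially zero weight, which is exactly the "weight from information content" behavior the response appeals to.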

5. The result section is dense, with too many tables of referenced numbers. Some higher-level summary or more interpretation of key findings would aid understanding, and visualization would help capture readers’ attention.

The authors’ answer: Thank you for your valuable and helpful suggestions. We have revised the results according to your suggestion. We have deleted some of the less important tables or converted them to figures. Table 3 was replaced by the sentence “The training sample comprised 572 positive, 125 neutral, 537 negative, and 531 emotionless comments, for a total of 1765 comments.” Table 4 was converted to Figure 11 (the accuracy of the six classifiers). Tables 5 and 6 were moved to the appendix. The result section thus went from 10 tables to 6 tables.

6. Discussion should provide the implications of the results, limitations of the approach, and directions for future work.

The authors’ answer: Thank you for your valuable and helpful suggestions. We have revised the discussion according to your suggestion. We delve into the implications of the results in Sections 5.1 and 5.2 of the discussion. Additionally, in the updated Section 5.3, we highlight the limitations of the current study and offer insights into directions for future work.

5.3 Implications and future research

The limitations of this study and potential future research avenues primarily encompass the following aspects. Firstly, the sentiment analysis might not be sufficiently precise. The sentence-level sentiment analysis method we employ assumes that the sentiment expressed in a sentence applies to the attributes mentioned within it; however, this approach is limited when a sentence contains multiple attributes and varying emotions. Future research should employ more fine-grained methods, such as aspect-level sentiment analysis, to enhance precision. Secondly, the way we determine costs is not necessarily accurate. As our attribute improvement costs are determined through the votes of six experts, this sample size may be insufficient and introduces a degree of subjectivity, potentially leading to inaccurate cost judgments. A comprehensive and thorough investigation into the costs associated with course improvements is therefore needed. Thirdly, we have employed only one multi-attribute decision-making method, namely TOPSIS, but other methods may be more suitable in certain scenarios; whether other multi-attribute decision-making techniques outperform TOPSIS merits further exploration. Fourthly, our experiments were limited to only two courses, C language and Java language, so the results obtained may not be universally applicable. To validate the generalizability of our method, it is necessary to apply it to courses across diverse fields.

7. The conclusion should highlight the key takeaways, contributions, and broader impacts.

The authors’ answer: Thank you for your valuable and helpful suggestions. We have revised the conclusion accordingly and added new and further contributions to it.

The main contributions of this study are as follows:

(1) Regarding data selection, this study employs learners' online comments as the analytical dataset to establish a text mining model specifically tailored to MOOC online comments. The outcomes offer a real-time, visual representation of learners' preferences and needs.

(2) Given the limited resources available, their rational utilization is important for enhancing overall course quality. Previous studies primarily determined the priority of attribute improvements solely from learners' satisfaction levels, neglecting the crucial aspect of cost. Nevertheless, cost plays a pivotal role in improving course attributes: when the cost of enhancing a specific attribute is prohibitively high, it becomes imperative to deliberate on whether to pursue the improvement despite the attribute's low satisfaction rating. Consequently, this study builds upon previous research by incorporating both cost considerations and non-emotional reviews, enabling course managers to undertake a more comprehensive evaluation when aiming to enhance courses.

(3) This study introduces a novel approach that integrates text mining and multi-attribute decision-making frameworks to effectively enhance MOOC course attributes from the learners' perspective. By incorporating non-emotional reviews and considering improvement costs, our method offers a comprehensive and practical solution for course managers. The deep learning techniques used in sentiment analysis ensure accurate categorization of reviews, while the expert voting mechanism provides a reliable estimation of improvement costs. The TOPSIS method then facilitates the prioritization of attributes for improvement, enabling course managers to make informed decisions. This approach not only addresses the challenge of managing overwhelming information but also ensures that limited resources are allocated efficiently.
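For readers unfamiliar with the prioritization step, the standard TOPSIS procedure described above (normalize the decision matrix, apply weights, measure each alternative's distance to the ideal and anti-ideal solutions, rank by closeness coefficient) can be sketched as follows. This is a generic illustration, not the authors' implementation; the attribute names, scores, and weights in the example are hypothetical.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with the standard TOPSIS procedure.

    matrix  : m alternatives x n criteria, raw scores
    weights : n criterion weights (should sum to 1)
    benefit : per-criterion flags; True = larger is better,
              False = cost criterion, smaller is better
    Returns closeness coefficients in [0, 1]; higher ranks better.
    """
    X = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    # Vector-normalize each criterion column, then apply weights.
    V = w * X / np.linalg.norm(X, axis=0)
    # Ideal (best) and anti-ideal (worst) value per criterion.
    b = np.asarray(benefit)
    best = np.where(b, V.max(axis=0), V.min(axis=0))
    worst = np.where(b, V.min(axis=0), V.max(axis=0))
    # Euclidean distance of each alternative to both reference points.
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)

# Hypothetical example: three course attributes scored on
# [positive-emotion share, negative-emotion share, improvement cost];
# the last two are cost criteria (smaller is better).
scores = [[0.8, 0.1, 3.0],
          [0.5, 0.4, 1.0],
          [0.6, 0.3, 2.0]]
cc = topsis(scores, [0.4, 0.3, 0.3], [True, False, False])
ranking = np.argsort(-cc)  # attribute indices from best to worst
```

Note that how the six dimensions are encoded as benefit or cost criteria, and how their weights are set, are modeling choices that the decision maker must fix before ranking.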

(4) Overall, at the theoretical level, this study enriches research on MOOC course improvement by adopting a learner-centric perspective, and it serves as a valuable reference for similar studies. From a managerial perspective, the use of text mining and multi-attribute decision-making techniques enables a more precise analysis of curriculum attributes, addressing the challenge of overwhelming information faced by course managers. Furthermore, it offers course managers a more comprehensive framework for prioritizing the improvement of course attributes across six dimensions: positive emotion, neutral emotion, negative emotion, no emotion, attention, and cost.

8. The references used include only 2 from 2023 and 1 from 2024. Kindly add newer and more recent references to strengthen the literature review. Similarity was checked and no issues were found.

The authors’ answer: Thank you for your valuable and helpful suggestions. On the basis of the original references, we have added 7 new papers from 2023 and 4 new papers from 2022.

We have tried our best to improve the manuscript and have made changes, marked in blue in the revised paper, that do not affect its content or framework. We sincerely appreciate the reviewers' diligent work and hope the corrections meet with approval. Once again, thank you very much for your comments and suggestions.

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

The authors have made significant changes based on the comments in the first round. I would still like to see references from 2023 and 2024. It can be published.

Author Response

Dear Editor and Reviewers,

We are writing to express our deep gratitude for your thoughtful consideration and review of this manuscript, titled “Text Mining and Multi-Attribute Decision Making-based Course Improvement in Massive Open Online Courses”. Since the comments from the anonymous experts are of great value as guidance, we have carefully read them all and incorporated them in the revision of our manuscript. We hope that the revised manuscript meets the requirements of Soft Computing.

In response to the comments, we have made major revisions to our work and provide point-by-point responses to the reviewers, with the revised content denoted in red font in this file and in blue font in the revised manuscript.

Once again, we greatly appreciate your consideration and kind guidance, and we hope that our responses resolve the questions raised by the experts and by Soft Computing. Responses to the reviewers’ comments are listed below:

Authors have made significant changes based on the comments in the first round. I would still like to see references from 2023 and 2024. 

The authors’ answer: Thank you for your valuable and helpful suggestions. On the basis of the original references, we have added 9 new papers from 2023 and 2 new papers from 2024. In total, we have cited 19 references from 2023 and 3 references from 2024.

We have tried our best to improve the manuscript and have made changes, marked in blue in the revised paper, that do not affect its content or framework. We sincerely appreciate the reviewers' diligent work and hope the corrections meet with approval. Once again, thank you very much for your comments and suggestions.
