Article
Peer-Review Record

Advanced Algorithmic Approaches for Scam Profile Detection on Instagram

Electronics 2024, 13(8), 1571; https://doi.org/10.3390/electronics13081571
by Biodoumoye George Bokolo * and Qingzhong Liu
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 28 February 2024 / Revised: 15 April 2024 / Accepted: 16 April 2024 / Published: 19 April 2024
(This article belongs to the Special Issue Cyber Attacks: Threats and Security Solutions)

Round 1

Reviewer 1 Report (Previous Reviewer 1)

Comments and Suggestions for Authors

The paper presents the efficacy of various machine learning algorithms, with ensemble methods, particularly XGBoost and Gradient Boosting, achieving an accuracy of up to 91%.
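
For context, the kind of accuracy comparison referred to here can be sketched as follows. This is a minimal illustration on synthetic placeholder data, not the authors' pipeline or dataset, and the model settings shown are assumptions.

```python
# Minimal, assumed sketch (not the authors' code): cross-validated accuracy for
# the two ensemble models named above, using synthetic placeholder data in
# place of the Instagram profile features studied in the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

# Synthetic stand-in for a tabular profile dataset with binary scam/genuine labels.
X, y = make_classification(n_samples=1000, n_features=12, random_state=42)

models = {
    "XGBoost": XGBClassifier(eval_metric="logloss", random_state=42),
    "Gradient Boosting": GradientBoostingClassifier(random_state=42),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```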

The related work section is very short, and there is no comparison with other published machine learning algorithms that claim very high accuracy without proper scientific evidence.

This paper claims very high accuracy without justification or comparison against other published machine learning methods.

I advise the authors to compare their method with existing approaches, revise the experimental methods, and resubmit the paper. The related work must be enhanced.

Comments on the Quality of English Language

Moderate English corrections are required

Author Response

 

After receiving the reviews on my previously submitted paper, I immediately began addressing the comments. First, in two of the three reviews I received, I was asked to improve the introduction and the references cited in my work. To address this, I rewrote the introduction and ensured all cited papers were integral to the point I was trying to make. Regarding the research design, I revamped my research methodology by replacing the dataset with a much larger and more comprehensive one. I then improved the description and justification of my research methods, thus addressing the stated need to enhance my research design.

 

I improved the description of my research methods by providing a rundown of all the machine learning models used in my research, explaining what they are and why they were selected for the suite of assessed models in the paper. I improved the presentation of my results by creating a line graph that better illustrates the research findings and complements the table presenting the research results. In one of the reviews, I was also asked to improve my conclusion by ensuring the research results support it. I did so by drawing my conclusions directly from the research findings presented in the table and graph.

 

Finally, Reviewer 1 commented that I needed to back up the high accuracy scores recorded by some of the assessed models with proper scientific evidence, and I was also asked to improve my review of related works. To address this, I deleted my previous literature review section and wrote a new one based on recent and nuanced articles related to my research topic, choosing the reviewed papers for their recency and relevance to my research goals. To back up the accuracy scores and other performance metrics recorded in my work, I ensured the description of my research methodology was clear, from data collection and preprocessing to model development and evaluation. In the discussion section, I also compared my findings with those of some of the related works I reviewed, showing how the other papers' conclusions support my findings.

 

I must reiterate that I duly appreciate the review feedback I received on my previous submission; rest assured that I have taken this feedback to heart and worked on it earnestly. Regarding highlighting the changes made in the paper, I would like to state that the paper has been almost entirely rewritten, from the introduction through to the discussion section. This was mainly to ensure that I addressed all the comments from the earlier review. As such, there is no highlighted text in the newly submitted version, as I believe it would be counterproductive to highlight over 90% of the submitted text.

 

I believe the changes I’ve made addressed your feedback, and I look forward to your positive response. Thank you.

Reviewer 2 Report (Previous Reviewer 2)

Comments and Suggestions for Authors

All previous reviewer comments have been well addressed.

Author Response

 

After receiving the reviews on my previously submitted paper, I immediately began addressing the comments. First, in two of the three reviews I received, I was asked to improve the introduction and the references cited in my work. To address this, I rewrote the introduction and ensured all cited papers were integral to the point I was trying to make. Regarding the research design, I revamped my research methodology by replacing the dataset with a much larger and more comprehensive one. I then improved the description and justification of my research methods, thus addressing the stated need to enhance my research design.

 

I improved the description of my research methods by providing a rundown of all the machine learning models used in my research, explaining what they are and why they were selected for the suite of assessed models in the paper. I improved the presentation of my results by creating a line graph that better illustrates the research findings and complements the table presenting the research results. In one of the reviews, I was also asked to improve my conclusion by ensuring the research results support it. I did so by drawing my conclusions directly from the research findings presented in the table and graph.

 

Finally, Reviewer 1 commented that I needed to back up the high accuracy scores recorded by some of the assessed models with proper scientific evidence, and I was also asked to improve my review of related works. To address this, I deleted my previous literature review section and wrote a new one based on recent and nuanced articles related to my research topic, choosing the reviewed papers for their recency and relevance to my research goals. To back up the accuracy scores and other performance metrics recorded in my work, I ensured the description of my research methodology was clear, from data collection and preprocessing to model development and evaluation. In the discussion section, I also compared my findings with those of some of the related works I reviewed, showing how the other papers' conclusions support my findings.

 

I must reiterate that I duly appreciate the review feedback I received on my previous submission; rest assured that I have taken this feedback to heart and worked on it earnestly. Regarding highlighting the changes made in the paper, I would like to state that the paper has been almost entirely rewritten, from the introduction through to the discussion section. This was mainly to ensure that I addressed all the comments from the earlier review. As such, there is no highlighted text in the newly submitted version, as I believe it would be counterproductive to highlight over 90% of the submitted text.

 

I believe the changes I’ve made addressed your feedback, and I look forward to your positive response. Thank you.

Reviewer 3 Report (New Reviewer)

Comments and Suggestions for Authors

This research paper gives a clear view of how algorithmic ensemble models (particularly XGBoost and Gradient Boosting, with F1 scores at 90%) can provide a useful means of distinguishing fake profiles from real ones on social media platforms such as Instagram. Your paper also presents a useful real-world computational science approach for helping to mitigate the harms of social media overuse, addictive engagement, and excessive influencer "buy-in" by teens and young adults in particular.

In short, your paper provides a useful cybersecurity safeguarding solution to the many psychological and physical harms experienced by youth who regularly engage with social media, as described by Facebook whistleblower Frances Haugen in her October 2021 testimony before the U.S. Senate.

Your literature review was clearly written, as was your algorithmic strategy for distinguishing fake from real profiles. Your study's limitations, namely the relatively small profile sample size (i.e., <600) and the fact that only one social media platform was tested, were openly discussed in your concluding remarks.

I am convinced that this set of study findings not only contributes to the field of cybersecurity guardrail implementation by social media platforms but also provides fodder for further investigations by computer scientists using larger numbers of profiles across a broader range of platforms.

Author Response

 

After receiving the reviews on my previously submitted paper, I immediately began addressing the comments. First, in two of the three reviews I received, I was asked to improve the introduction and the references cited in my work. To address this, I rewrote the introduction and ensured all cited papers were integral to the point I was trying to make. Regarding the research design, I revamped my research methodology by replacing the dataset with a much larger and more comprehensive one. I then improved the description and justification of my research methods, thus addressing the stated need to enhance my research design.

 

I improved the description of my research methods by providing a rundown of all the machine learning models used in my research, explaining what they are and why they were selected for the suite of assessed models in the paper. I improved the presentation of my results by creating a line graph that better illustrates the research findings and complements the table presenting the research results. In one of the reviews, I was also asked to improve my conclusion by ensuring the research results support it. I did so by drawing my conclusions directly from the research findings presented in the table and graph.

 

Finally, Reviewer 1 commented that I needed to back up the high accuracy scores recorded by some of the assessed models with proper scientific evidence, and I was also asked to improve my review of related works. To address this, I deleted my previous literature review section and wrote a new one based on recent and nuanced articles related to my research topic, choosing the reviewed papers for their recency and relevance to my research goals. To back up the accuracy scores and other performance metrics recorded in my work, I ensured the description of my research methodology was clear, from data collection and preprocessing to model development and evaluation. In the discussion section, I also compared my findings with those of some of the related works I reviewed, showing how the other papers' conclusions support my findings.

 

I must reiterate that I duly appreciate the review feedback I received on my previous submission; rest assured that I have taken this feedback to heart and worked on it earnestly. Regarding highlighting the changes made in the paper, I would like to state that the paper has been almost entirely rewritten, from the introduction through to the discussion section. This was mainly to ensure that I addressed all the comments from the earlier review. As such, there is no highlighted text in the newly submitted version, as I believe it would be counterproductive to highlight over 90% of the submitted text.

 

I believe the changes I’ve made addressed your feedback, and I look forward to your positive response. Thank you.

Round 2

Reviewer 1 Report (Previous Reviewer 1)

Comments and Suggestions for Authors

The paper is much improved but still requires further modifications.

1. There is no comparison in the related work section. There is a great deal of literature available on social media platforms and fake profiles. The authors should review existing fake-profile detection algorithms and compare them with their own technique.

2. Sections 4 and 5 are still too short; I suggest merging them and writing up the results more clearly, with more explanation of the performance metrics and model performance.

3. The discussion must include the limitations of the work.

 

Comments on the Quality of English Language

Minor English corrections are required

Author Response

RE: Research Paper Review

In response to your latest feedback on my research paper, I have made the necessary modifications to address your comments. 

I have modified the literature review section, ensuring that it highlights research trends and draws comparisons between relevant works in the field. The modifications made to this section are highlighted in yellow.

I have merged Sections 4 and 5, making the combined section more detailed with clearer explanations. I explain the research results in more detail, highlighting the relevance of each performance metric considered. I have also added a "research limitations" subsection to the discussion section.
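
For reference, the performance metrics referred to here (accuracy, precision, recall, and F1) are the standard binary-classification measures. The sketch below shows how they are typically computed with scikit-learn, using hypothetical labels rather than the paper's actual predictions.

```python
# Minimal, assumed sketch (not taken from the paper): standard binary-classification
# metrics for a scam-vs-genuine profile classifier, given ground-truth labels
# y_true and model predictions y_pred (hypothetical values shown here).
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical ground-truth labels (1 = scam)
y_pred = [1, 0, 1, 0, 0, 0, 1, 1]   # hypothetical model predictions

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))  # of profiles flagged as scams, fraction that truly are
print("recall   :", recall_score(y_true, y_pred))     # of actual scams, fraction that were flagged
print("f1       :", f1_score(y_true, y_pred))
print("confusion matrix:\n", confusion_matrix(y_true, y_pred))
```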

These changes, I believe, address your latest comments. Thank you.

Author Response File: Author Response.pdf

Round 3

Reviewer 1 Report (Previous Reviewer 1)

Comments and Suggestions for Authors

The authors have addressed my comments except for the Features Overview explanation. Kindly explain the Features Overview in detail.

Comments on the Quality of English Language

English style is fine

Author Response

We have taken steps to address your latest comments on our paper. Specifically, we have added sub-subsections under the Dataset Description subsection that explain the dataset's features in more detail. To aid identification and review, all the newly added text is highlighted in yellow within the Overleaf document. We hope this satisfactorily addresses your comment. Thank you.

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The paper presents algorithmic approaches for scam profile detection on Instagram.

 

1. The contribution of the paper is not clear. The author writes that this research evaluates the efficacy of various machine learning algorithms. It would be better to state that this paper presents algorithmic approaches for scam profile detection based on …, provided through classifiers including Logistic Regression, Decision Trees, Support Vector Machines, Random Forest, K-Nearest Neighbors, XGBoost, Gradient Boosting, AdaBoost, and Extra Trees.

2. The introduction requires more background on the terminology used.

3. Also, outline the structure of the paper before the related work section, for example: "The rest of the paper is structured as follows: Section 2 presents related work; Section 3 presents …"

4. The literature review requires a comparison with other similar published algorithms.

5. Describe the source of the dataset.

6. Add limitations in the discussion section.

7. Do not use bullet points in the conclusions.

Comments on the Quality of English Language

Minor corrections are required

Reviewer 2 Report

Comments and Suggestions for Authors

The author conducted research on how to detect scam profiles on Instagram using 10 machine learning algorithms, studying 576 samples with 12 extracted features. The results show that XGBoost and Gradient Boosting performed best. The paper is written to be very easy to read; however, although the research addresses scam profile detection on Instagram using machine learning techniques, significant improvements are needed throughout the paper. Detailed comments are as follows (an illustrative sketch of the benchmark setup described here is given after the comments).

1. The paper has a very simple overall structure. The main content is simply a comparison and analysis of 10 algorithms on a single dataset, with the results reported. There is little content presenting the author's novel ideas, new approaches, or new methodologies.

2. Typos were found in several places in the paper. The description of the second feature, Username Length, in Table I is incorrect; it is copied directly from Table II. In Section 3.4, there is no content in the first bullet. In Table II, the values of Hyperparameters and Settings are identical for Random Forest and Extra Trees; if this is not an error, further explanation is needed. At the end of the conclusion, the sentence ends with "....".

3. Significant improvement is needed in the quality of the paper. The reference section does not match the scientific paper citation style, and the number of papers referenced is quite small. The contribution of the paper is not clearly explained in the introduction, and the overall novelty of the paper is weak. Moreover, the experimental dataset is relatively small for machine learning algorithms. Many entries in the Rationale for Inclusion column of Table I are hard to follow. For example, it is stated that anomalously high following counts can be a sign of fake profiles, but insufficient evidence is given for why this judgment was made.

4. If a machine learning algorithm is used, an explanation is required as to why that algorithm was chosen, and an explanation is also needed as to why other algorithms, such as LSTM, were not used. Since Section 3.5 presents standard evaluation metrics, it is difficult to say it is new. It would also be good to state in the paper which algorithm the author used or developed when applying a machine learning model. Additionally, it would be better if there were further explanations or data showing the reliability of the results in Table III. Explaining the feature selection strategy would also improve the quality of the paper.

5. Some grandiose expressions are used in the conclusion, but the results of this study alone do not seem sufficient to support claims at that level. It would be better to summarize the conclusions of the paper from a more realistic perspective.
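
To make the benchmark setup described in these comments concrete, the following is a minimal, assumed sketch (not the authors' code) of evaluating several of the named classifiers on a synthetic stand-in dataset of 576 rows and 12 features, reporting the accuracy and F1 metrics discussed above.

```python
# Minimal, assumed sketch (not the authors' code): a single train/test split over
# a synthetic stand-in dataset (576 rows x 12 features, mirroring the figures
# quoted in the review), comparing several of the classifiers named in the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, ExtraTreesClassifier,
                              GradientBoostingClassifier, RandomForestClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=576, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

classifiers = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
    "K-Nearest Neighbors": KNeighborsClassifier(),
    "Gradient Boosting": GradientBoostingClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "Extra Trees": ExtraTreesClassifier(random_state=0),
}

for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(f"{name:20s} acc={accuracy_score(y_te, pred):.3f} "
          f"f1={f1_score(y_te, pred):.3f}")
```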

Comments on the Quality of English Language

There is no major problem with English overall.
