Article

Using the Theoretical-Experiential Binomial for Educating AI-Literate Students

by Horia Alexandru Modran 1,*, Doru Ursuțiu 1,2 and Cornel Samoilă 1,3

1 Faculty of Electrical Engineering and Computer Science, Transilvania University of Brasov, 500036 Brasov, Romania
2 Academy of Romanian Scientists, 3 Ilfov, 050044 Bucharest, Romania
3 Technical Sciences Academy of Romania, 010413 Bucharest, Romania
* Author to whom correspondence should be addressed.
Sustainability 2024, 16(10), 4068; https://doi.org/10.3390/su16104068
Submission received: 29 February 2024 / Revised: 26 April 2024 / Accepted: 11 May 2024 / Published: 13 May 2024

Abstract:
In the dynamic landscape of modern education, characterized by the increasingly active involvement of IT technologies in learning, transferring to university students the skills needed to integrate Artificial Intelligence (AI) into the learning process has become an important goal. This paper presents a novel framework for knowledge transfer that diverges from traditional programming-language-centric approaches by integrating PSoC 6 microcontroller technology. The framework proposes a cyclical learning process encompassing theoretical fundamentals and practical experimentation, fostering AI literacy at the edge. Through a structured combination of theoretical instruction and hands-on experimentation, students develop proficiency in understanding and harnessing AI capabilities. Emphasizing critical thinking, problem-solving, and creativity, this approach equips students with the tools to navigate the complexities of real-world AI applications effectively. By leveraging PSoC 6 as an educational tool, a new generation of individuals is efficiently cultivated with essential AI skills. These individuals are adept at leveraging AI technologies to address societal challenges and drive innovation, thereby contributing to long-term sustainability initiatives. Specific strategies for experiential learning, curriculum recommendations, and the results of applying this knowledge are presented, aimed at preparing university students to excel in a future where AI will be omnipresent and indispensable.

1. Introduction

We believe that in an era defined by rapid technological advancement and a growing awareness of sustainability, the intersection of innovation and responsibility has never been more crucial. Technology, with its ever-expanding capabilities, has woven itself intricately into every aspect of modern life, revolutionizing industries, shaping economies, and altering social dynamics. At the forefront of this technological revolution stands Artificial Intelligence (AI), a field characterized by its ability to analyze vast amounts of data, derive insights, and even make autonomous decisions. Among the several manifestations of AI, GPT (Generative Pre-trained Transformer) models represent a peak of natural language processing, a very useful tool even in the educational context [1]. Therefore, in today’s landscape, fostering AI literacy among students is not only about preparing them for the digital future but also about equipping them with the tools to navigate complex ethical issues and contribute to sustainable development goals.
Delving into existing studies, Ng et al. [2] conducted an exploratory review to establish a theoretical underpinning for defining, teaching, and assessing AI literacy. Through meticulous analysis of 18 peer-reviewed articles, their work identified four foundational pillars for nurturing AI literacy: knowledge and comprehension of AI, practical utilization of AI, critical evaluation of AI, and ethical engagement with AI. This seminal research underscores the imperative of integrating AI literacy into educational curricula, ensuring that students possess the competencies essential for navigating an increasingly AI-driven world.
Complementary to Ng et al.’s exploration, Kasinidou [3] underscored the universality of AI literacy, advocating for its dissemination across diverse demographic cohorts through participatory design paradigms. Kasinidou’s emphasis on understanding public perceptions of AI and fostering effective educational initiatives resonates with the broader imperative of democratizing access to AI literacy.
Further enriching our understanding, studies on AI literacy assessment instruments provide invaluable insights into the multidimensional nature of AI literacy among students. In another study conducted by Ng et al. [4], the authors utilized both exploratory and confirmatory factor analyses to delineate the underlying dimensions of AI literacy, emphasizing affective, behavioral, cognitive, and ethical facets in assessing students’ proficiency in AI technologies. Similarly, the work of Laupichler et al. [5] elucidated key factors influencing non-experts’ AI literacy, underscoring the significance of technical comprehension and ethical awareness. Augmenting these insights are empirical investigations into the efficacy of AI literacy interventions within educational contexts [6]. Kong et al. [7] demonstrated notable enhancements in university students’ conceptual grasp of Machine Learning and Deep Learning concepts following participation in AI literacy courses. Such endeavors not only bolster students’ self-perceived AI literacy but also underscore the centrality of conceptual scaffolding in facilitating comprehension. Cetindamar et al. [8] explored the dimensions of AI literacy and identified key capabilities essential for employees. The study highlighted technology-related, work-related, human-machine-related, and learning-related capabilities crucial for fostering AI literacy among non-technical professionals.
The study by Celik [9] highlighted the significant impact of computational thinking on enhancing AI literacy among higher education students. By investigating the relationships between AI literacy, digital divide, computational thinking, and cognitive absorption, the research emphasizes the importance of integrating computational thinking skills in educational programs to foster AI literacy among students.
Moreover, the integration of AI concepts into interdisciplinary educational frameworks, as advocated by Relmasira et al. [10], underscores the feasibility of AI literacy attainment across diverse educational paradigms. Similarly, the successful infusion of AI principles into maker education, as demonstrated by Ng et al. [11], underscores the transformative potential of experiential learning modalities in nurturing AI literacy. Lin et al. [12] revealed that AI literacy positively influences secondary school students’ computational thinking efficacy in learning AI.
The study by Wang et al. [13] enhanced the understanding of AI literacy and developed a 12-item scale to measure user competence in using Artificial Intelligence. Their research highlights the positive correlation between AI literacy and users’ attitudes, daily usage, and proficiency in AI technology. Domínguez Figaredo et al. [14] emphasized the importance of a stakeholder-first approach in AI education, highlighting the need to prioritize understanding and reflection on the societal impact of AI. By focusing on contextualized knowledge and adapting learning strategies, the study aimed to enhance AI literacy among diverse audiences. Long [15] provided a comprehensive exploration of AI literacy competencies and design considerations, offering valuable insights for educating non-technical learners about AI.
The systematic literature review on AI education interventions in K-12 settings [16,17] revealed a growing body of evidence supporting the effectiveness of teaching AI concepts to students. The review highlighted the importance of a learner-centered approach, context-aware pedagogical practices, and the measurement of AI learning outcomes for future research and practice in K-12 AI education.
The comparative analysis with other AI education methodologies sheds light on the distinctive advantages of the theoretical-experiential binomial approach proposed in this study. While purely theoretical approaches may offer a strong foundational understanding of AI concepts, they often lack practical application and hands-on experience. Conversely, purely practical approaches may focus on immediate application but overlook the critical thinking and problem-solving skills developed through theoretical understanding. By integrating theoretical knowledge with experiential learning, our framework strikes a balance that equips students with both a solid theoretical foundation and practical skills essential for navigating real-world AI challenges. This approach aims to promote creativity, innovation, and critical thinking among students, preparing them for successful careers in the rapidly evolving field of Artificial Intelligence.
The research question of this study revolves around exploring the effectiveness of an educational framework that combines theoretical understanding with practical experimentation to enhance AI literacy among students. The study aims to investigate how this theoretical-experiential approach can foster creativity, innovation, critical thinking, and practical skills essential for successful careers in the field of AI.
The paper is structured into four main sections. Section 1 presents similar studies and covers the introduction of the educational framework. The second section describes the proposed framework together with two practical experiments using Internet of Things (IoT) devices, while Section 3 illustrates the results of our study and feedback on the framework’s effectiveness. The last section presents the final conclusions of this research.

2. Materials and Methods

Experiential learning holds significant importance in education due to its ability to actively engage students in the learning process, leading to enhanced understanding, retention, and application of knowledge [18]. By involving students in hands-on experiences, simulations, and real-world applications, experiential learning goes beyond traditional passive learning methods, such as lectures and readings. This approach allows students to connect theoretical concepts to practical situations, fostering critical thinking, problem-solving skills, and creativity. Additionally, experiential learning promotes a deeper level of understanding and long-term retention of information by immersing students in meaningful learning experiences. Experiential learning plays a vital role in preparing students for real-world challenges, equipping them with the skills and knowledge needed to succeed in their academic and professional endeavors. Morris [19] conducted a thorough examination of Kolb’s experiential learning model [20], highlighting the need for empirical testing and potential revisions to enhance its applicability.
The current study employs a structured approach delineated into five main stages to elucidate the process of fostering AI literacy among students, encapsulated within the theoretical-experiential binomial framework. The process for this complete learning cycle is illustrated in Figure 1.
As the diagram shows, the educational framework proposed herein delineates a structured approach comprising five sequential stages to facilitate comprehensive learning and skill acquisition in Artificial Intelligence (AI):
  • Theoretical fundamentals: This initial stage acquaints students with foundational concepts in Machine Learning (ML) and Deep Learning (DL). Through theoretical instruction, students develop a solid understanding of ML algorithms and DL neural networks, laying the groundwork for subsequent practical applications.
  • AI experiments: Building upon the theoretical foundation established in Stage 1, students progress to engaging in a series of AI experiments (from Experiment 1 to Experiment n). These experiments serve as experiential learning modules, enabling students to apply theoretical knowledge to real-world scenarios and tasks.
  • Critical thinking: As students advance through the experimental phase, critical thinking skills are cultivated. They learn to analyze data, challenge assumptions, pose insightful questions, and reflect on their thought processes. This critical thinking capacity enhances their ability to discern patterns, evaluate outcomes, and make informed decisions in AI experimentation.
  • Experimentation and introspective analysis: In this stage, students delve deeper into experimentation, conducting iterative analyses and fostering introspection. Through hands-on exploration and reflection, they cultivate wisdom in navigating the intricacies of AI systems. This introspective analysis aids in refining experimental methodologies and optimizing AI solutions.
  • Creativity and implementation: The final stage emphasizes the development of creativity and innovation in AI implementation. Equipped with a comprehensive understanding of theoretical principles, critical thinking skills, and practical experimentation experience, students are empowered to conceptualize novel solutions and think innovatively. They gain proficiency in designing and implementing experiments tailored to specific objectives, demonstrating versatility and adaptability in addressing diverse AI challenges.
In our view, this structured progression through five distinct stages ensures a holistic educational experience, equipping students with the requisite knowledge, skills, and mindset to navigate the complexities of AI applications effectively. By integrating theoretical instruction with hands-on experimentation and fostering critical thinking and creativity, this educational framework promotes AI literacy and empowers students to become adept practitioners in the field of Artificial Intelligence.
As described previously, in the second stage of the educational framework, several AI experiments were conducted, culminating in the development of various applications. Among these, two pivotal experiments are highlighted in this paper for their significance in AI literacy. The first experiment focuses on the detection and classification of musical notes, speech, and background noise, showcasing the capability of AI algorithms to discern auditory signals in real-time environments. The second experiment centers on human activity recognition, wherein AI models are trained to recognize and categorize diverse human movements and behaviors with high accuracy. These experiments exemplify the practical application of theoretical concepts in real-world scenarios, providing valuable insights into the capabilities and limitations of AI technology in addressing multifaceted challenges across different domains.

2.1. Detecting Musical Notes, Speech, and Background Noise

The technology for extracting musical information, still under development, is an important component of music technology, with methods of Artificial Intelligence increasingly integrated into this field. Note detection and recognition represent a branch of musical information extraction and constitute a significant research theme in the domain of audio signal analysis and processing [21]. While recurrent neural networks are typically preferred for time series, fully connected neural networks are often favored for Edge devices, such as the PSoC 6 (Programmable System on a Chip), due to their parallelism and their energy and computational efficiency [22].
In this context, the developed experiment proposes automatic real-time detection of musical notes, speech, and background noise using a Deep Learning model based on a fully connected neural network. The experiment utilized the SensiML plugin, which aids in collecting data from PSoC 6 through attached sensors, also providing methods for labeling the captured data. The experiment consists of the following steps:
  • audio data acquisition and annotation,
  • applying signal pre-processing techniques to the acquired data,
  • designing and training a classification algorithm, and
  • implementing an intelligent model optimized for the IoT device.
The development of real-world Edge AI applications requires high-quality annotated data. The SensiML data capture application facilitates the collection, annotation, and exploration of sensor time series data, proving to be valuable even for students [23].

2.1.1. Data Acquisition and Annotation

The data were acquired using the microphone on the CY8KIT-028-TFT shield. It contains a digital microphone with a Pulse-Density Modulation (PDM) output on a single bit, allowing the conversion of any acquired sound into a digital signal. The PSoC 6 device converts this digital signal into a quantized 16-bit Pulse-Code Modulation (PCM) value. An interrupt is triggered when there are sufficient data to be processed, specifically at least 128 samples.
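To make this interrupt-driven buffering scheme concrete for students, the following is a minimal host-side Python sketch (not the device firmware) that accumulates synthetic 16-bit PCM samples into 128-sample frames; the tone generator and frame handler are illustrative assumptions.

```python
import numpy as np

SAMPLE_RATE = 16_000   # Hz, sampling rate used on the PSoC 6 in this experiment
FRAME_SIZE = 128       # samples accumulated before the processing interrupt fires

def frames_from_stream(samples: np.ndarray):
    """Yield successive 128-sample frames of 16-bit PCM data."""
    n_frames = len(samples) // FRAME_SIZE
    for i in range(n_frames):
        yield samples[i * FRAME_SIZE:(i + 1) * FRAME_SIZE]

# One second of synthetic audio (a 440 Hz tone) quantized to int16,
# standing in for the PCM stream produced by the PDM-to-PCM converter.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
tone = (0.5 * np.sin(2 * np.pi * 440 * t) * 32767).astype(np.int16)

for frame in frames_from_stream(tone):
    pass  # each frame would be passed on to feature extraction / inference
```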
Furthermore, the students were instructed that to avoid bias, it is important for the acquired audio data to be as clean as possible and to ensure a wide diversity (musical notes from various instruments, different voices in speech, etc.).
After programming the data acquisition application on the PSoC 6 device, data were acquired at a sampling rate of 16,000 Hz. Numerous segments were saved for each of the following musical notes: D, E, F, G, A, B. Upon completion of data acquisition, students were instructed on how to label the data in the Data Capture Lab (Figure 2).
In addition to musical notes, multiple files containing speech and ambient noise audio data were acquired and labeled. Furthermore, after acquisition, the data were automatically uploaded to the Cloud. Through the Cloud portal, the data could be visualized, and all labels, as well as their distribution, could be analyzed. To ensure that students gained the necessary skills to build an appropriate dataset, a similar number of segments were created for each note, while for speech and noise, more segments were generated, considering their greater variety and more complex characteristics.
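As an illustration of how such a balance check could be automated, the short Python sketch below counts labeled segments per class from a hypothetical CSV export of the annotated data; the file name and column names are assumptions, not the Data Capture Lab's actual schema.

```python
from collections import Counter
import csv

def label_distribution(csv_path: str) -> Counter:
    """Count how many labeled segments belong to each class."""
    counts = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["label"]] += 1
    return counts

# Hypothetical export of the labeled segments.
counts = label_distribution("labeled_segments.csv")
for label, n in counts.most_common():
    print(f"{label}: {n} segments")
# The notes D-B should have roughly equal counts; speech and noise may have more,
# reflecting their greater variety, as described above.
```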

2.1.2. Machine Learning Model Design and Training

To build a classification Deep Learning model, a TensorFlow Lite for Microcontrollers pipeline was implemented.
The next step involves adding a filter and configuring the entire pipeline. At this stage, the following elements were configured, with students assimilating the meaning and role of each parameter (Figure 3):
  • Windowing: segmenter of size 400—takes input from the sensor transformation/filter step.
  • Frequency domain feature generator—a collection of feature generators that process the data segment to extract meaningful information.
  • Data balancing: Undersample Majority Classes—creates a balanced dataset by undersampling the majority classes using random sampling without replacement.
  • Feature quantization: Min-max scaler—normalizes and scales the data to integer values between min_bound and max_bound.
  • Outlier filter: Z-score filter—filters feature vectors that have values outside of a limit threshold (threshold set at 3).
  • Classifier: TensorFlow Lite for Microcontrollers—takes a feature vector as input and returns a classification based on a predefined model.
  • Training algorithm: Fully connected neural network, which includes the following features:
    • Dense layers of sizes (number of neurons) 128, 64, 32, 16, 8.
    • Learning rate of 0.01.
    • Softmax activation for the final layer.
    • Number of epochs: 4.
    • Threshold of 0.8.
  • Categorical cross-entropy loss function.
  • Validation: Stratified Shuffle Split—the validation scheme splits the dataset into training, validation, and testing sets with similar label distributions.
  • Validation parameters—accuracy, F1 score, sensitivity.
After optimization, a model was generated whose characteristics were graphically displayed. As shown in Table 1, the model achieved very good performance indicators in terms of accuracy, sensitivity, and F1 score, as well as in the size of the classifier.
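The network itself was generated by the TensorFlow Lite for Microcontrollers pipeline described above; purely as a study aid, the following Keras sketch reproduces the listed layer sizes and training settings, under the assumptions of 23 input features (Table 1), 8 output classes (the six notes plus speech and background noise), and ReLU hidden activations, which the pipeline description does not specify.

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import StratifiedShuffleSplit

NUM_FEATURES = 23   # length of the feature vector reported in Table 1
NUM_CLASSES = 8     # six notes (D, E, F, G, A, B) plus speech and background noise

def build_note_classifier() -> tf.keras.Model:
    """Fully connected classifier mirroring the pipeline settings listed above."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(NUM_FEATURES,)),
        tf.keras.layers.Dense(128, activation="relu"),  # hidden activations are an assumption
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # final layer of size 8
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

# Placeholder feature matrix and one-hot labels standing in for the exported dataset.
X = np.random.rand(1000, NUM_FEATURES).astype("float32")
labels = np.random.randint(0, NUM_CLASSES, size=1000)
y = tf.keras.utils.to_categorical(labels, NUM_CLASSES)

# Stratified Shuffle Split keeps similar label distributions across the splits.
splitter = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, val_idx = next(splitter.split(X, labels))

model = build_note_classifier()
model.fit(X[train_idx], y[train_idx],
          validation_data=(X[val_idx], y[val_idx]),
          epochs=4)  # four epochs, beyond which accuracy and loss stopped improving
```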

2.1.3. Model Testing and Validation

For students to better understand and analyze the cases where the classifier predicted correctly or incorrectly, confusion matrices were determined for both the training set and the validation set (Figure 4).
From Figure 4 and Figure 5, it can be inferred that the model achieved very good and similar performances both on the training dataset (data that the model has previously seen and provided during the training phase) and on the validation dataset, which the network has not seen before. The only erroneous predictions, also observed by the students, were regarding speech and only in specific cases—when the sound intensity was too low and there was background noise as well.
Multiple values were tested for the number of epochs, and through graphical analysis, it was concluded that the optimal number in terms of both training and prediction time, as well as accuracy, was 4. Figure 5 displays the accuracy obtained at each training and validation epoch. This analysis revealed that after 4 epochs, neither accuracy nor loss improved significantly.
Considering that the model performed very well on the test and validation data, it can be run by students in real time to validate its functionality on the PSoC 6 device.
Before deploying the Machine Learning model on the IoT device, predictions can be visualized in real time in the Data Capture Lab. The application runs the model on the data captured in real time by the microphone connected to the PSoC 6 device. The classification results generated by the model are added to the history and can be graphically visualized in real time, along with the latest prediction made (Figure 6).
The application program on the PSoC 6 sends real-time predictions via UART, which can be visualized by students either through PuTTY or Open Gateway or on the TFT screen connected to the PSoC 6 (Figure 7). Through Open Gateway, a serial connection can be established to the COM port allocated to the IoT device. After establishing the connection, in the testing mode, the current prediction is displayed along with the history of previous predictions.
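Instead of PuTTY or Open Gateway, the streamed predictions can also be logged from a short script. The sketch below uses the pyserial package and assumes a hypothetical COM port, baud rate, and one plain-text prediction per line, which may differ from the actual firmware output format.

```python
import serial  # pyserial

# Port name and baud rate are assumptions; use the COM port assigned to the kit.
PORT = "COM5"
BAUD_RATE = 115200

def log_predictions(n_lines: int = 20) -> None:
    """Read and print prediction lines sent by the PSoC 6 over UART."""
    with serial.Serial(PORT, BAUD_RATE, timeout=2) as uart:
        for _ in range(n_lines):
            line = uart.readline().decode(errors="ignore").strip()
            if line:
                print(line)  # e.g., a class name reported by the firmware

if __name__ == "__main__":
    log_predictions()
```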
The performances in this stage have proven to be as good as in the validation stage. The exploration of musical note recognition within the context of AI literacy serves as a hands-on application of theoretical concepts, providing students with a tangible and engaging learning experience. This experiment encourages students to explore the interdisciplinary nature of AI literacy, bridging the gap between technology and creative arts. By engaging in such hands-on activities, they can develop a holistic understanding of AI’s potential applications across various fields, preparing them for the multidimensional challenges of the digital age.
Furthermore, it not only reinforces students’ comprehension of AI concepts but also fosters critical thinking, problem-solving, and creativity. Through this exercise, students learn how AI algorithms can be utilized in diverse scenarios, developing their ability to analyze data, make informed decisions, and innovate solutions.

2.2. Classification of Human Activities by Edge Techniques

Human activity recognition has become an active research topic due to the availability of sensors and accelerometers, low energy consumption, real-time data transmission, and advancements in Artificial Intelligence, Machine Learning, and IoT. Through it, various human activities can be recognized, such as walking, running, sleeping, standing, driving, and abnormal activities. Students can further develop the experiment for widespread use in medical diagnosis, monitoring the elderly, and creating a smart home. Additionally, driving activity can also be recognized, enhancing traffic safety. The experiment was programmed and run on the PSoC 6 microcontroller, developed by Infineon.
As a general rule, the process of developing a model for recognizing human activities consists of four main stages (Figure 8):
  • Acceleration signal acquisition.
  • Data preprocessing.
  • Activity recognition (based on Deep Learning techniques).
  • User interface for transmitting and displaying the prediction.
This experiment accomplishes the classification of human activities based on motion sensor data (accelerometer and gyroscope). The model deployed on the IoT device was pre-trained on a computer using Keras and classifies a few common activities: stationary, walking, and running.
The operation of the application was described and explained to the students through a block diagram (Figure 9). In an infinite loop, the IoT device reads data from a motion sensor (BMX160, Bosch Sensortec, Reutlingen, Germany) attached to the PSoC 6 to detect activities. The dataset consists of orientation data on 3 axes from both accelerometer and gyroscope. A timer is configured to interrupt at 128 Hz. The interrupt handler reads all 6 axes via SPI and signals a data processing task when the internal buffer has 128 new samples. It applies an IIR filter and min-max normalization to 128 samples simultaneously. These processed data are then transmitted to the inference processor. The inference processor determines and returns the prediction confidence for each activity class. If the confidence exceeds an 80% threshold, the predicted activity is displayed on the UART terminal. This application utilizes FreeRTOS. Within it, a system task—the activity task—was defined and executed, which processes received data and forwards it to the ML model.
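A host-side Python sketch of the preprocessing chain in this loop (IIR filtering followed by min-max normalization, with the 80% confidence gate) is given below; the Butterworth filter order and cutoff, the class order, and the synthetic input frame are illustrative assumptions rather than the firmware's actual parameters.

```python
import numpy as np
from scipy.signal import butter, lfilter

FRAME = 128          # samples handed to the processing task at a time
SAMPLING_RATE = 128  # Hz, timer interrupt rate on the device
CLASSES = ["stationary", "walking", "running"]
CONFIDENCE_THRESHOLD = 0.80  # prediction shown only above 80% confidence

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Low-pass IIR filter followed by min-max normalization of a 6-axis frame."""
    # 4th-order Butterworth low-pass at 20 Hz (illustrative coefficients).
    b, a = butter(4, 20, btype="low", fs=SAMPLING_RATE)
    filtered = lfilter(b, a, frame, axis=0)
    mins, maxs = filtered.min(axis=0), filtered.max(axis=0)
    return (filtered - mins) / (maxs - mins + 1e-9)

def report(probabilities: np.ndarray) -> None:
    """Print the predicted activity only if its confidence exceeds the threshold."""
    idx = int(np.argmax(probabilities))
    if probabilities[idx] >= CONFIDENCE_THRESHOLD:
        print(f"{CLASSES[idx]} ({probabilities[idx]:.0%})")

# Example with synthetic accelerometer + gyroscope data (128 samples x 6 axes).
frame = np.random.randn(FRAME, 6)
features = preprocess(frame)
report(np.array([0.05, 0.90, 0.05]))  # would print "walking (90%)"
```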

2.2.1. Data Acquisition

The data for training the Machine Learning model were collected from multiple users during various activities, using the BMX160 sensor attached to the PSoC 6, then labeled according to activity and saved in a CSV (Comma-Separated Values) file. Prior consent was obtained from the users through an IRB agreement, and the acquired and stored data comply with the General Data Protection Regulation (GDPR). When saving new data or gestures, a Python script can be used, which takes parameters such as the activity name and the person collecting the data.
The collected data were graphically displayed for analysis and cleaning purposes (Figure 10). To ensure the relevance of the data, students need to be aware that it should come from multiple individuals of diverse ages and anatomies.
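The collection script itself is not reproduced in the paper; a minimal sketch of such a script, taking the activity name and the collector as parameters and writing a labeled CSV file, might look as follows (the column names and the placeholder data are assumptions).

```python
import argparse
import csv
from datetime import datetime

import numpy as np

def save_session(samples: np.ndarray, activity: str, collector: str) -> str:
    """Write one recording session of 6-axis motion data to a labeled CSV file."""
    filename = f"{activity}_{collector}_{datetime.now():%Y%m%d_%H%M%S}.csv"
    header = ["ax", "ay", "az", "gx", "gy", "gz", "activity", "collector"]
    with open(filename, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        for row in samples:
            writer.writerow([*row.tolist(), activity, collector])
    return filename

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Label and save motion data.")
    parser.add_argument("--activity", required=True, help="e.g. walking")
    parser.add_argument("--collector", required=True, help="person collecting the data")
    args = parser.parse_args()
    # Placeholder data; on the real setup this would come from the BMX160 sensor.
    data = np.random.randn(256, 6)
    print("Saved:", save_session(data, args.activity, args.collector))
```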

2.2.2. AI Model Development

After data collection, a model was developed using that data—a step that included both training and calibrating the model. For this problem, a neural network model was built in Python (version 3.12) using the Keras library. Before the training stage, the entire process was presented to the students: data preprocessing, cleaning, and random splitting of the dataset into training, validation, and testing sets. Following this, the data were converted into a standardized format, activities that the model could classify were generated, and the final calibration of the model was performed.
The model was then trained using the collected data, employing the following features:
  • Adam optimizer.
  • Learning rate of 0.0001.
  • Metrics: accuracy, confusion matrix.
  • 20 epochs and 1000 steps for each epoch.
The confusion matrix was displayed graphically so that students could visualize the classification performance of the model (Figure 11).
The weights and structure of the model were then saved in a file to be used by students for programming the IoT device, and its validation and calibration data were saved in a separate file. Performance indicators of the model are illustrated in Table 2, and Figure 12 presents the characteristics of the final model. The Convolutional Neural Network (CNN) model consists of two convolutional blocks and two fully connected layers.
Each convolutional block includes convolutional operations, Rectified Linear Unit (ReLU) activation, and a dropout layer, with the addition of a flattened layer after the first block. The convolutional layers act as feature extractors and provide abstract representations of the input sensor data in the feature map. They capture short-term dependencies (spatial relationships) in the data. In the developed network, features are extracted and then used as inputs to a fully connected network, using Softmax activation for classification.
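The exact filter counts, kernel sizes, and dropout rates are not given in the paper; the following Keras sketch illustrates one plausible realization of the described architecture (two convolutional blocks with ReLU and dropout, a flatten layer, and two fully connected layers ending in Softmax) together with the stated training settings.

```python
import tensorflow as tf

WINDOW = 128      # samples per frame (see the block diagram above)
CHANNELS = 6      # 3-axis accelerometer + 3-axis gyroscope
CLASSES = 3       # stationary, walking, running

def build_har_model() -> tf.keras.Model:
    """CNN sketch: two convolutional blocks followed by two dense layers."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
        # Convolutional block 1 (filter counts and kernel sizes are illustrative).
        tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
        tf.keras.layers.Dropout(0.2),
        # Convolutional block 2.
        tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Flatten(),
        # Two fully connected layers; Softmax gives the class probabilities.
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(CLASSES, activation="softmax"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

# Training as described above: 20 epochs of 1000 steps each, e.g. with a
# tf.data pipeline built from the labeled CSV files.
# model.fit(train_dataset, validation_data=val_dataset,
#           epochs=20, steps_per_epoch=1000)
```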
Following this, the generated model was programmed by students onto the device for testing. The ML Configurator also shows the resources consumed by the model, which are suitable for any microcontroller. The model was validated on a PC, yielding very good results for both 8 × 8 and floating-point quantization. Before being programmed onto the PSoC 6 device, the model needs to be checked to verify that it is optimized for the hardware available on that device. Following the validation performed in the ML Configurator, 100% accuracy and a 0.01 prediction error were achieved (Figure 13).
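Quantization and on-target validation were performed with the ML Configurator; for reference, an analogous quantization step with the standard TensorFlow Lite converter could be sketched as follows, assuming `model` is the trained Keras model and `representative_frames` iterates over preprocessed input windows.

```python
import numpy as np
import tensorflow as tf

def to_int8_tflite(model: tf.keras.Model, representative_frames) -> bytes:
    """Convert a trained Keras model to a fully int8-quantized TFLite model."""
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]

    def representative_dataset():
        for frame in representative_frames:
            # One preprocessed window per calibration sample.
            yield [frame.astype(np.float32)[np.newaxis, ...]]

    converter.representative_dataset = representative_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    return converter.convert()

# tflite_bytes = to_int8_tflite(model, representative_frames)
# open("har_model_int8.tflite", "wb").write(tflite_bytes)
```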
After programming the device, through the serial connection to the computer, it displays on the UART terminal the prediction in real-time, together with the confidence percentage for each class (Figure 14).
This experiment showcases the potential of AI models to analyze complex data patterns and make informed decisions based on input signals. By engaging in this hands-on activity, students enhance their skills in data analysis, model training, and result interpretation, contributing to their overall proficiency in AI technology. Furthermore, the experiment highlights the importance of data quality and feature selection in training AI models for human activity recognition. By optimizing the input data and refining the model architecture, students can improve the accuracy and efficiency of the AI system, demonstrating the critical role of data preprocessing in enhancing model performance. Therefore, students are not only able to deepen their understanding of AI concepts, but they can also develop essential skills in model development, data analysis, and problem-solving, preparing them for future challenges in the field of Artificial Intelligence.

3. Results and Discussion

This section summarizes the specific conclusions of this article and suggests opportunities and recommendations for further research. The research was conducted with the assistance of the Center for Valorization and Skills Transfer (CVTC) at Transilvania University of Brașov, Romania, in partnership with the Faculty of Electrical Engineering and Computer Science. The participants in our study consisted of 93 Romanian students ranging in age from 21 to 23 years old, both male and female. These students were predominantly from IT study fields, forming a homogeneous demographic group for our research.
Following the implementation of the structured experimental learning process in the field of Artificial Intelligence (AI) according to the theoretical-experiential binomial, promising results were obtained in promoting AI literacy among students. The well-defined stages of this process represent an important step in enhancing students’ understanding, practical skills, and critical thinking in the field of AI.
By implementing the proposed educational framework, the structured process in five distinct stages ensures a holistic educational experience. In the first stage, learning the theoretical fundamentals of AI provided students with a solid knowledge base in the realm of Machine Learning and Deep Learning. This allowed them to progress to the next stage, engaging in practical experiments in AI.
As they progressed through experiments, students developed critical thinking skills, analyzing data, questioning assumptions, and evaluating results. Experimentation and introspective analysis allowed students to gain a deeper understanding of the complexities of AI systems and identify ways to optimize them.
The final stage emphasized the development of creativity and innovation in implementing AI algorithms. Students were encouraged to conceptualize new solutions and approach challenges in the field of AI innovatively. They gained competencies in designing and implementing experiments tailored to specific objectives.
The two practical experiments presented in this study highlighted the direct application of theoretical concepts in real-world scenarios.
The first experiment had a significant educational impact, with students gaining a profound understanding of:
  • Data acquisition and annotation process: The importance of a diverse collection of high-quality and correctly labeled data was highlighted through practical exercises.
  • Signal pre-processing techniques: Filtering, normalization, and other techniques were implemented and analyzed, strengthening the theoretical knowledge.
  • Neural network design and training: Network architecture, training parameters, and model optimization were studied and tuned, providing valuable practical experience.
  • Implementing models on IoT devices: Hardware limitations and resource optimization were considered, preparing students for real-world challenges.
The second experiment facilitated the following aspects:
  • Principles of activity recognition: Students grasped the steps of data acquisition, preprocessing, classification, and user interface.
  • Development of Convolutional Neural Network models: Specific architecture, model optimization, and validation were studied in a practical context.
  • Utilization of sensors and IoT devices: Direct experimentation with motion sensors, chip programming, and data interpretation was conducted.
  • Ethics and responsibility in AI: Considerations regarding data collection, privacy, and bias were successfully integrated into the learning process.
Through the analysis of AI models’ performance on the PSoC 6 device, it has been demonstrated that these models can be efficiently implemented and provide reliable real-time results. The performance of the human activity classification model achieved very high accuracy, confirming the effectiveness of the proposed technologies in training students in the field of AI.
Despite the significant benefits brought by the experimental learning process in the field of Artificial Intelligence, certain limitations need to be addressed to enhance its efficiency and accuracy. One of the main limitations is the availability and quality of the datasets used in experiments. The quality of the data can significantly influence the model’s performance and its ability to generalize to new scenarios. Moreover, access to adequate equipment and computational resources can be challenging for certain institutions or students.
Another important limitation is related to the complexity and abstraction of some theoretical concepts in the field of AI, which may be difficult for some students to grasp without proper educational support. Additionally, there may be difficulties in adapting theory to the practical requirements of different applications or areas of interest. To overcome these limitations, some improvements and adjustments to the experimental learning process in the field of AI are necessary. Firstly, it is important to pay increased attention to data collection and processing, ensuring that they are representative and of high quality. It is also essential to provide students with adequate resources and tools to conduct experiments efficiently.
Moreover, it is necessary to develop educational materials and teaching methods that facilitate the understanding of complex theoretical concepts in a more accessible and interactive manner. Integrating case studies and practical projects into the curriculum can provide students with additional opportunities to apply theoretical knowledge in relevant contexts and acquire practical skills in solving real-world problems in the field of AI. By adopting a balanced approach between theory and practice and providing adequate resources and support, the experimental learning process in the field of AI can be significantly improved, thereby preparing students for future challenges and opportunities in this continuously evolving domain.
We believe that closer integration of collaboration and teamwork in the learning process could significantly enhance students’ experience. Teamwork can facilitate the exchange of ideas and experiences among students, encouraging collaboration and creativity. This could better prepare students for the professional environment, where AI projects are often developed in multidisciplinary teams.
Thus, from a sustainability perspective, developing practical skills in the field of AI prepares students for successful careers in rapidly expanding fields. The use of IoT devices contributes to interactive learning, making the educational framework more flexible and adaptable to future needs.
Students participating in the theoretical-experiential binomial approach to AI education immersed themselves in a range of engaging experiential learning activities aimed at bridging theoretical knowledge with practical application. They collaborated in teams to preprocess the sensor data, train predictive models, and deploy the system on IoT devices to monitor equipment health in real-time. During these activities, students encountered diverse challenges that tested their problem-solving skills and critical thinking abilities. One notable challenge involved the integration of complex AI algorithms into IoT devices, requiring students to navigate compatibility issues and optimize code for efficient performance. Additionally, students faced data quality challenges, such as incomplete datasets and noisy inputs, which necessitated robust preprocessing techniques to ensure accurate model training. To overcome these hurdles, students engaged in collaborative problem-solving sessions, leveraging peer feedback and mentor guidance to refine their approaches. Through iterative experimentation, meticulous data analysis, and innovative algorithmic adjustments, students successfully tackled these challenges, demonstrating resilience, adaptability, and a deepening understanding of AI concepts in real-world applications.
For assessing the proposed framework, a comprehensive approach to measuring the competencies acquired by students was employed. The assessment methodology encompassed both quantitative and qualitative measures to ensure a nuanced understanding of the students’ skill development. Quantitatively, standardized assessments and surveys tailored to evaluate specific competencies related to AI literacy were utilized, such as knowledge retention, critical thinking abilities, and practical application skills, which are very important according to the AI Competency Framework [24]. These quantitative measures provided numerical data that allowed for statistical analysis and comparison of pre- and post-intervention competencies. In addition to quantitative measures, qualitative methods were employed as well, such as interviews, focus groups, and reflective journals, to capture the nuanced aspects of students’ learning experiences, including their perceptions, attitudes, and deeper insights gained from the educational intervention. By incorporating both quantitative and qualitative measures, the aim was to provide a comprehensive assessment of the impact of our educational intervention on students’ competencies in navigating AI applications effectively.
To evaluate the proposed approach, a questionnaire consisting of 10 questions was developed, addressing aspects such as the usefulness of educational materials, the relevance of practical experiments, student satisfaction, and the impact on the development of specific AI domain competencies. The questions and response options of this questionnaire are presented in Table 3.
Student feedback was collected through an online anonymous survey on Google Forms (https://docs.google.com/forms/d/e/1FAIpQLSdkXBucogpGud5ceaS7ptIiwNPBMZ3U-YWk0dDRZfoQuHTPdQ/viewform?vc=0&c=0&w=1&flr=0, accessed on 10 May 2024). Students expressed positive responses so far, appreciating the proposed approach, finding the practical experiments helpful for better understanding theoretical concepts, and rating the experience with a high grade (Figure 15). Most of the respondents were overall satisfied with this learning experience, but they identified a few aspects of the educational framework that could be improved, which will be taken into account in the further development of this framework:
  • Offering opportunities for peer collaboration or discussion forums.
  • Providing additional resources or supplementary materials.
  • Adjusting the pace of the course to the needs of each student.
  • Including more interactive activities or practical exercises.
The qualitative evaluation of our educational framework was conducted through a triangulation of methods: interviews, focus groups, and reflective journals. Each method contributed equally to the overall assessment of AI literacy competencies.
Interviews were structured to probe deeper into students’ understanding of AI concepts. The data collected from these interviews were subjected to thematic analysis, where responses were coded and categorized into themes related to knowledge retention, critical thinking, and practical application. This process allowed us to identify patterns and draw correlations between the educational intervention and the development of specific AI competencies. From the individual interviews conducted with a subset of 30 students, we gathered rich qualitative data. Students reported a heightened awareness of AI’s ethical implications, while the thematic analysis revealed a recurring theme of increased confidence in discussing and proposing AI implementations, suggesting that the intervention successfully improved their creativity and critical thinking competencies.
Focus groups were facilitated to encourage peer-to-peer interaction and collective reflection. The discussions were recorded and transcribed verbatim. Subsequently, a content analysis was performed to quantify the frequency of specific competencies mentioned by the participants. This quantitative approach within a qualitative method provides a nuanced understanding of the group’s perception of the educational framework’s effectiveness. The focus group discussions, involving six groups of approximately 15 students each, provided a collaborative platform for students to share their learning experiences. The content analysis indicated a significant emphasis on the practical application of AI, with students actively discussing their projects and the challenges they faced. This highlights the framework’s effectiveness in engaging students with real-world AI problems.
Reflective journals offer a longitudinal perspective on individual learning progressions. Students’ entries were analyzed using a narrative analysis technique, which involves constructing a chronological narrative that illustrates the evolution of their competencies over time. This method highlights personal growth and the acquisition of AI-related skills throughout the intervention. Analysis of the reflective journals, maintained by all 93 students over the course of the intervention, offered longitudinal insights into their learning trajectories. Narrative analysis showed a clear progression in students’ ability to integrate AI knowledge into their existing skill sets. Many journals reflected a journey from theoretical understanding to practical application, with students documenting their personal growth and increased AI literacy.
Together, these qualitative methods form a comprehensive evaluation strategy. The data derived from these methods are integrated to present a holistic view of the educational intervention’s impact. The triangulation of data ensures reliability and validity, providing a robust foundation for conclusions about the framework’s effectiveness in fostering AI literacy.
Compared to similar studies, our research has some important advantages. In contrast to the study conducted by Ng et al. [4], which emphasized the affective, behavioral, cognitive, and ethical facets of AI literacy, our research places a stronger emphasis on the role of experiential learning in enhancing students’ proficiency in AI technologies. While Laupichler et al. [5,6] highlighted the importance of technical comprehension and ethical awareness in non-experts’ AI literacy, our study extends this understanding by demonstrating the effectiveness of hands-on experiences and real-world applications in fostering these competencies. Furthermore, our results echo the findings of [7,8], who reported improvements in students’ understanding of Machine Learning and Deep Learning concepts following AI literacy courses. However, our study further contributes to this body of literature by demonstrating the effectiveness of our proposed educational framework, which combines theoretical understanding with practical experimentation.

4. Conclusions

In contrast to prior literature, our framework stands out for its integration of theoretical understanding and hands-on experience. According to the survey results, it achieves a balance that empowers students with a robust theoretical grounding alongside the vital practical skills needed for tackling real-world AI obstacles. The aim of this study was to validate this approach and to demonstrate its effectiveness in nurturing AI literacy and fostering creativity, innovation, and critical thinking among students. Moreover, our analysis identifies both predictable and unpredictable outcomes, further solidifying the reliability and adaptability of our framework in preparing students for dynamic careers within the ever-evolving landscape of Artificial Intelligence.
Taking into consideration the results of the conducted survey, the theoretical-experiential binomial shows promising results for educating students in the field of Artificial Intelligence. The practical implementation of models on IoT devices reinforces students’ understanding and allows them to experiment with theoretical concepts directly. The educational framework presented can be easily adapted to various application domains of Artificial Intelligence, contributing to preparing students for successful careers in rapidly expanding fields.
This approach brings significant contributions for students in the following aspects:
  • Development of practical skills: Programming skills, data analysis, critical thinking, and problem-solving abilities have been cultivated through concrete experiments.
  • Consolidation of theoretical understanding: Practical implementation has reinforced theoretical concepts, providing a holistic perspective on AI.
  • Stimulating creativity and innovation: Practical experiments have allowed students to explore innovative solutions and personalize their projects.
  • Preparation for careers in AI: The acquired skills are essential for a successful career in the field of Artificial Intelligence.
The proposed educational framework offers a range of sustainable benefits, from access to interactive and industry-relevant learning experiences to the development of transferable skills adaptable to various AI application domains. The experimental approach fosters creativity and innovation, while the flexibility of the framework allows adaptation to diverse needs and available resources. Implementing this educational framework will provide students with quality preparation, turning them into competent and innovative professionals capable of contributing to societal progress.
In light of the study’s findings, several possibilities for further research emerge. First and foremost, exploring the scalability and adaptability of the proposed framework across different educational contexts warrants attention. Investigating its effectiveness in diverse settings, such as K-12 schools, universities, and vocational training programs, would provide valuable insights. Additionally, longitudinal studies tracking students’ AI literacy development over extended periods could shed light on the long-term impact of this educational approach.
Furthermore, delving into specific pedagogical strategies within the theoretical-experiential binomial framework is essential. For instance, understanding the optimal balance between theoretical lectures and hands-on projects would enhance instructional design. Additionally, incorporating collaborative learning experiences—where students work in multidisciplinary teams—could foster problem-solving skills with real-world applicability. Considering the rapid evolution of AI technologies, continuous updates to the curriculum are crucial. Integrating emerging topics such as ethical AI, interpretability, and bias mitigation would ensure that students remain well prepared for the dynamic AI landscape.

Author Contributions

Conceptualization, H.A.M. and D.U.; methodology, C.S.; software, H.A.M.; validation, D.U. and C.S.; formal analysis, H.A.M. and D.U.; investigation, H.A.M.; resources, H.A.M., C.S. and D.U.; data curation, H.A.M.; writing—original draft preparation, H.A.M.; writing—review and editing, D.U. and C.S.; visualization, D.U. and C.S.; supervision, D.U.; project administration, C.S.; funding acquisition, D.U. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

We would like to express our deep appreciation to the Cypress/Infineon company for providing us with free PSoC6 kits, facilitating this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Modran, H.; Chamunorwa, T.; Ursuțiu, D.; Samoilă, C. Integrating Artificial Intelligence and ChatGPT into Higher Engineering Education. In Towards a Hybrid, Flexible and Socially Engaged Higher Education; Auer, M.E., Cukierman, U.R., Vendrell Vidal, E., Tovar Caro, E., Eds.; Lecture Notes in Networks and Systems; ICL: Tel Aviv, Israel; Springer: Berlin/Heidelberg, Germany, 2023; Volume 899. [Google Scholar] [CrossRef]
  2. Ng, D.; Leung, J.; Chu, K.; Qiao, M. AI Literacy: Definition, Teaching, Evaluation and Ethical Issues. Proc. Assoc. Inf. Sci. Technol. 2021, 58, 504–509. [Google Scholar] [CrossRef]
  3. Kasinidou, M. AI Literacy for All: A Participatory Approach. In Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 2 (ITiCSE 2023), Turku, Finland, 7–12 July 2023; Association for Computing Machinery: New York, NY, USA, 2023; pp. 607–608. [Google Scholar] [CrossRef]
  4. Ng, D.; Wu, W.; Leung, J.; Chu, S. Artificial Intelligence (AI) Literacy Questionnaire with Confirmatory Factor Analysis. In Proceedings of the 2023 IEEE International Conference on Advanced Learning Technologies (ICALT), Orem, UT, USA, 10–13 July 2023; pp. 233–235. [Google Scholar] [CrossRef]
  5. Laupichler, M.; Aster, A.; Haverkamp, N.; Raupach, T. Development of the “Scale for the assessment of non-experts’ AI literacy”—An exploratory factor analysis. Comput. Hum. Behav. Rep. 2023, 12, 100338. [Google Scholar] [CrossRef]
  6. Laupichler, M.; Aster, A.; Perschewski, J.-O.; Schleiss, J. Evaluating AI Courses: A Valid and Reliable Instrument for Assessing Artificial-Intelligence Learning through Comparative Self-Assessment. Educ. Sci. 2023, 13, 978. [Google Scholar] [CrossRef]
  7. Kong, S.; Cheung, W.; Zhang, G. Evaluating artificial intelligence literacy courses for fostering conceptual learning, literacy and empowerment in university students: Refocusing to conceptual building. Comput. Hum. Behav. Rep. 2022, 7, 100223. [Google Scholar] [CrossRef]
  8. Cetindamar, D.; Kitto, K.; Wu, M.; Zhang, Y.; Abedin, B.; Knight, S. Explicating AI Literacy of Employees at Digital Workplaces. IEEE Trans. Eng. Manag. 2024, 71, 810–823. [Google Scholar] [CrossRef]
  9. Celik, I. Exploring the Determinants of Artificial Intelligence (AI) Literacy: Digital Divide, Computational Thinking, Cognitive Absorption. Telemat. Inform. 2023, 86, 102026. [Google Scholar] [CrossRef]
  10. Relmasira, S.; Lai, Y.; Donaldson, J. Fostering AI Literacy in Elementary Science, Technology, Engineering, Art, and Mathematics (STEAM) Education in the Age of Generative AI. Sustainability 2023, 15, 13595. [Google Scholar] [CrossRef]
  11. Ng, D.; Su, J.; Chu, S. Fostering Secondary School Students’ AI Literacy through Making AI-Driven Recycling Bins. Educ. Inf. Technol. 2023. [Google Scholar] [CrossRef]
  12. Lin, X.; Zhou, Y.; Shen, W. Modeling the structural relationships among Chinese secondary school students’ computational thinking efficacy in learning AI, AI literacy, and approaches to learning AI. Educ. Inf. Technol. 2023, 29, 6189–6215. [Google Scholar] [CrossRef]
  13. Wang, B.; Rau, R.; Yuan, T. Measuring user competence in using artificial intelligence: Validity and reliability of artificial intelligence literacy scale. Behav. Inf. Technol. 2023, 42, 1324–1337. [Google Scholar] [CrossRef]
  14. Domínguez Figaredo, D.; Stoyanovich, J. Responsible AI literacy: A stakeholder-first approach. Big Data Soc. 2023, 10. [Google Scholar] [CrossRef]
  15. Long, D.; Magerko, B. What is AI Literacy? Competencies and Design Considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ‘20), Honolulu, HI, USA, 25–30 April 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 1–16. [Google Scholar] [CrossRef]
  16. Su, J.; Guo, K.; Chen, X.; Chu, S. Teaching artificial intelligence in K-12 classrooms: A scoping review. Interact. Learn. Environ. 2023, 1–20. [Google Scholar] [CrossRef]
  17. Rizvi, S.; Waite, J.; Sentance, S. Artificial Intelligence teaching and learning in K-12 from 2019 to 2022: A systematic literature review. Comput. Educ. Artif. Intell. 2023, 4, 100145. [Google Scholar] [CrossRef]
  18. McCarthy, M. Experiential Learning Theory: From Theory To Practice. J. Bus. Econ. Res. (JBER) 2016, 14, 91–100. [Google Scholar] [CrossRef]
  19. Morris, T. Experiential learning—A systematic review and revision of Kolb’s model. Interact. Learn. Environ. 2020, 28, 1064–1077. [Google Scholar] [CrossRef]
  20. Kolb, D. Experiential Learning: Experience as the Source of Learning and Development, 2nd ed.; Pearson: Upper Saddle River, NJ, USA, 2015; ISBN 978-0-13-389240-6. [Google Scholar]
  21. Yue, Y. Note Detection in Music Teaching Based on Intelligent Bidirectional Recurrent Neural Network. Hindawi Secur. Commun. Netw. 2022, 2022, 8135583. [Google Scholar] [CrossRef]
  22. Brusa, E.; Delprete, C.; Di Maggio, L. Deep transfer learning for machine diagnosis: From sound and music recognition to bearing fault detection. Appl. Sci. 2021, 11, 11663. [Google Scholar] [CrossRef]
  23. SensiML Data Capture Lab Documentation. Available online: https://sensiml.com/documentation/data-capture-lab/index.html (accessed on 20 February 2024).
  24. Artificial Intelligence Competency Framework—A Success Pipeline from College to University and Beyond, Dawson College. Available online: https://www.dawsoncollege.qc.ca/ai/wp-content/uploads/sites/180/Corrected-FINAL_PIA_ConcordiaDawson_AICompetencyFramework.pdf (accessed on 15 April 2024).
Figure 1. The complete learning cycle [20].
Figure 2. Labeling musical notes.
Figure 3. Features of the pipeline for the ML algorithm.
Figure 4. Confusion matrix for validation dataset.
Figure 5. Accuracy obtained for each epoch.
Figure 6. Real-time model validation.
Figure 7. Printing the prediction on the TFT screen.
Figure 8. Stages of development of a model of recognition of human activities.
Figure 9. Block diagram of the developed application.
Figure 10. Acquired data.
Figure 11. Confusion matrix.
Figure 12. Convolutional Neural Network structure.
Figure 13. Model validation on PSoC 6.
Figure 14. Model prediction.
Figure 15. Survey question.
Table 1. Model indicators.
Indicator | Value
Accuracy | 97.69%
Size of the classifier | 20,372 bytes
Number of features | 23
Table 2. Model performance indicators.
Indicator | Value
Loss | 0.0169
Accuracy | 0.9971
Precision | 0.9975
Recall | 0.9947
F1-Score | 0.9961
Table 3. Questionnaire questions.
Question | Answer Variants
1. How much do you appreciate the proposed theoretical-experiential approach? | Scale 1–5 (1—very much, 5—a little).
2. To what extent do you think the practical experiments helped you to better understand the theoretical concepts? | Scale 1–5 (1—very much, 5—a little).
3. What was the experiment you liked the most? | Free answer.
4. What practical skills did you gain from the experiments? | (a) Design and implementation of AI models; (b) Use of IoT sensors and devices; (c) Data analysis and interpretation of results; (d) Others (specify).
5. Do you think the teaching materials were useful? | Yes/No
6. How would you rate your experience in the hands-on experiments in this program? | Scale 1–5 (1—very much, 5—a little).
7. How well do you feel this course has prepared you for a career in Artificial Intelligence? | Scale 1–5 (1—very much, 5—a little).
8. What was the biggest challenge encountered in learning the theoretical materials? | (a) Understanding key concepts; (b) Applying concepts in practical experiments; (c) Task management; (d) Others (specify).
9. How would you describe your overall satisfaction with this learning experience? | Scale 1–5 (1—very satisfied, 5—very unsatisfied).
10. What aspects of the educational framework could be improved? | Free answer.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

