The Role of Artificial Intelligence Autonomy in Higher Education: A Uses and Gratification Perspective

1. Introduction

Artificial intelligence (AI) technologies are “the frontier of computational advancements that references human intelligence in addressing ever more complex decision-making problems” [1], including machine learning models, natural-language processing techniques, computer vision, robotics, and related algorithms. In higher education, AI is considered an effective tool and has been applied in several learning contexts. Industry reports show that the global AI education market was valued at USD 1.82 billion in 2021 and is expected to grow at a compound annual growth rate of 36.0% from 2022 to 2030 [2]. AI has enormous potential for the future of higher education: it is expected not only to serve as a tool that assists teachers in teaching, but also to act as an independent agent that replaces human teachers in automating education. For example, Georgia State University has launched higher education courses with AI as an independent teaching assistant [3]. By automatically supervising students’ learning performance, actively tracking their learning progress, and independently offering personalized feedback and suggestions, AI educators are fundamentally changing the learning patterns of college students, the teaching methods of teachers, and the relationship between teachers and students in higher education. Given the disruptive value of independent AI educators in student learning, exploring how their autonomous design features affect students’ intrinsic needs and intentions to use AI is of great significance for the sustainable development of higher education, the sustainable application of AI technology in higher education, and the sustainable self-driven learning of college students.

The proliferation of AI applications in education has also attracted scholars’ attention. Most recent studies focus on the teacher’s perspective and demonstrate that, through the benefits provided by the autonomy of AI educators, teachers can free themselves from tedious teaching tasks such as homework correction, error analysis, personalized weakness analysis, and even basic knowledge teaching. In fact, autonomous AI educators can not only replace teachers in completing certain teaching tasks, but can also actively replace students in completing parts of the learning process, such as collecting learning materials and developing learning plans. However, how students in higher education perceive this technology and whether they are willing to use such autonomous AI educators remains unknown. Considering that the student is the most central entity in the learning process, the current study investigates how different levels of AI autonomy change students’ perceptions and intentions to use AI educators.

Unlike primary and secondary school students, college students have developed relatively mature cognitive abilities and can recognize their own unique needs in the learning process [4,5,6]. Higher education research documents that college students are motivated to take action to satisfy their own needs during the learning process [7,8,9,10], such as seeking information to satisfy the need to acquire information, enjoying themselves and relaxing to satisfy their entertainment needs, and building relationships with other social actors to satisfy their social interaction needs. The success of AI educators is thus likely to depend heavily on whether they can meet students’ intrinsic needs. For example, when an AI educator actively meets students’ needs, it is likely to be welcomed; if an AI educator automatically replaces students in decision-making or action but cannot meet their needs, it is likely to be resisted. In this regard, an important but unexamined research question arises: how does the artificial autonomy of AI educators influence students’ intention to use them by satisfying students’ needs?
Extending previous higher education studies from the student perspective, the current study explores the effect of the artificial autonomy of AI educators on students’ usage intentions through user gratification. We first review the literature on AI education, artificial autonomy, and the uses and gratifications (U&G) theory. Based on the literature, we categorize the artificial autonomy of AI educators into sensing autonomy, thought autonomy, and action autonomy to capture the autonomous ability of AI educators at each step of the problem-solving process. In addition, we focus on the most important and robust dimensions of U&G benefits, i.e., information seeking, social interaction, and entertainment. Next, we propose our hypotheses to theoretically elaborate on how the different categories of artificial autonomy (i.e., sensing, thought, and action autonomy) induce students’ usage intentions through the mediating effects of distinct U&G benefits (information seeking, social interaction, and entertainment). An online survey was performed to test the proposed conceptual model, and the methodology and data analysis results are presented. Finally, Section 6 discusses our findings, theoretical contributions, and practical implications. The limitations and potential directions for future studies are also disclosed.

3. Research Model and Hypotheses Development

Research has shown that college students’ investments in learning and their media choices are more active than passive [53,55,56], and that they are more selective in accepting cutting-edge technologies [54,61]. As AI education applications become prevalent, the role of college students’ gratifications in linking AI educators’ artificial autonomy to their usage intention becomes more significant. Therefore, we propose a comprehensive framework (shown in Figure 1) to reveal how different types of artificial autonomy in AI educators affect usage intention through U&G benefits.
Drawing on the U&G theory [51,52], the literature on artificial autonomy [33,37,38], and the sense–think–act (STA) paradigm [64,65], this study provides a new perspective for understanding how AI educators’ artificial autonomy can improve usage intention. The proposed model takes into account that users proactively select AI educators when motivated by U&G benefits. In the following, we first categorize the artificial autonomy of AI educators into sensing autonomy, thought autonomy, and action autonomy. Next, we identify the most important and robust dimensions of U&G in the literature [45,66,67,68,69,70,71,72,73,74,75], i.e., information seeking, social interaction, and entertainment, as mediators. Finally, we develop three hypotheses regarding how U&G benefits mediate the impact of the artificial autonomy factors (i.e., sensing autonomy, thought autonomy, and action autonomy) on the endogenous factor of usage intention.
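To make the hypothesized mediation structure concrete, the short sketch below illustrates, on simulated data, what an indirect effect of one autonomy factor on usage intention through a single gratification mediator means statistically. All variable names, effect sizes, and the percentile-bootstrap procedure are illustrative assumptions introduced for exposition only; they do not reproduce the measurement model or the analysis reported later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: X = one autonomy factor (e.g., sensing autonomy),
# M = one gratification mediator, Y = usage intention, for n simulated respondents.
n = 500
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)
y = 0.4 * m + 0.1 * x + rng.normal(size=n)

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                        # a-path: X -> M
    design = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]  # b-path: M -> Y, controlling for X
    return a * b                                      # indirect effect = a * b

# Percentile bootstrap for the indirect effect.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                       # resample respondents with replacement
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))

print("indirect effect:", round(indirect_effect(x, m, y), 3),
      "95% CI:", np.percentile(boot, [2.5, 97.5]).round(3))
```

If the bootstrap confidence interval for a*b excludes zero, the mediated (indirect) path is treated as significant, which is the sense in which the mediating effects are discussed in the hypotheses below.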

3.1. Categorizing the Artificial Autonomy of AI Educators

Autonomy broadly means “self-rule” [76], “self-law” [77], or “self-government” [78,79], and has been applied to various entities, such as humans, machines, and political institutions. As mentioned, artificial autonomy describes the ability of machines, systems, or robots to perform tasks independently in a human-like manner with little or no human intervention. Thus, artificial autonomy is closely linked to specific problem-solving tasks [39]. Typically, a problem-solving task is addressed in three steps: sense, think, and act [64,65]. That is, the task performer first senses and identifies the environmental factors, then thinks and reflects to generate a decision or plan, and finally acts on the plan to solve the problem. Consistent with previous studies [33,39,80,81,82,83], we categorize artificial autonomy, based on the STA paradigm, into sensing autonomy, thought autonomy, and action autonomy to capture the autonomous ability of AI educators at each step of the problem-solving process.

Specifically, sensing autonomy refers to the ability of AI to autonomously sense the surrounding environment. For example, AI educators can see things in the environment through a camera, hear surrounding sounds through sound sensors, sense what is happening around them through other sensors, and recognize the user’s current biological status (such as heart rate, skin conductance, and blood pressure) through wearable devices. Thought autonomy refers to the ability of AI to make decisions or plans independently. For example, AI educators can determine the video and practice list based on the user’s progress, learning habits, and current status (e.g., emotions, flow, tiredness); determine the start and end times of learning based on the user’s past preferences and habits; and establish the next learning plan based on the user’s learning habits and current learning progress. These all reflect AI’s ability to think, reflect, and make decisions or plans independently. Action autonomy refers to the ability of AI to act, execute plans, or perform certain behavioral activities independently. For example, AI educators can autonomously play and stop the current video list; remind users to complete exercise questions, grade test papers, and generate score analysis reports; and execute teaching plans, teach specific chapters independently, and actively perform tutoring tasks.
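As a purely illustrative sketch of how the three autonomy types map onto the STA paradigm, the following example walks one learner snapshot through a sense–think–act loop. The signals, thresholds, and actions are hypothetical assumptions and are not drawn from any AI educator examined in this study.

```python
from dataclasses import dataclass

@dataclass
class LearnerState:
    """Signals an AI educator might sense (hypothetical fields)."""
    paused_video: bool
    rewatch_count: int
    heart_rate: int

def sense(state: LearnerState) -> dict:
    # Sensing autonomy: derive observations from raw signals without user commands.
    return {"confused": state.paused_video or state.rewatch_count >= 2,
            "fatigued": state.heart_rate < 60}

def think(observations: dict) -> str:
    # Thought autonomy: turn observations into a decision or plan.
    if observations["confused"]:
        return "offer_detailed_video"
    if observations["fatigued"]:
        return "suggest_break"
    return "continue_plan"

def act(decision: str) -> None:
    # Action autonomy: execute the plan without waiting for a manual command.
    actions = {"offer_detailed_video": "Switching to a more detailed explanation...",
               "suggest_break": "Pausing the lesson and proposing a short break...",
               "continue_plan": "Continuing with the scheduled lesson..."}
    print(actions[decision])

# One pass through the sense-think-act loop for a single learner snapshot.
act(think(sense(LearnerState(paused_video=True, rewatch_count=1, heart_rate=72))))
```

Each function corresponds to one autonomy type; an AI educator with only partial autonomy would hand one or more of these steps back to the student or the human teacher.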

3.2. Identifying the U&G Benefits of AI Educators

Prior studies have suggested that user gratifications for Internet and website use can serve as a fundamental framework for AI-acceptance research. Since the early 1990s, researchers have identified several gratification factors for Internet and website use, among which information seeking, social interaction, and entertainment are the most important and robust dimensions of U&G benefits [45,73,74,75]. However, their effects can vary across contexts. For example, Luo, Chea and Chen [73] showed that information-seeking, social interaction, and entertainment gratifications were all crucial in improving usage behavior. Lin and Wu [45] and Lee and Ma [74] found that information-seeking and social interaction gratifications were positively associated with consumer engagement and users’ intention to share news, while entertainment gratification was not. Choi, Fowler, Goh and Yuan [75] demonstrated that information seeking significantly influenced user satisfaction with a hotel’s Facebook page, while social interaction and entertainment did not.
Information-seeking gratification reflects the instrumental and utilitarian orientation of media or technology usage. In the current context, it refers to the extent to which AI educators can provide users with relevant and timely information [66,74,84]. For example, users can seek information and track the latest news using their learning materials, the content generated by AI educators or shared by peer learners, and their past usage records.
Social interaction gratification reflects users’ motivation to use media or technology to contact or interact with others. Although social interaction motivations are mostly considered in relation to human–human interaction, they have also been shown to be strongly related to human–AI interaction [44,60,85]. In our AI educator context, this gratification refers to the extent to which AI educators can communicate with users and establish and maintain relationships with them [86,87]. For example, AI educators can proactively express their concern to users, inquire whether users need certain services, answer users’ questions, chat with users, and establish friendships with users.

Entertainment gratification represents the hedonic orientation of media or technology usage. In the context of an AI educator, it refers to the extent to which AI educators can provide users with fun and entertaining experiences [66,88,89]. For example, users can derive pleasure and entertainment from a gamified learning process, the humorous teaching style of AI educators, and vivid and interesting case studies.

3.3. Hypotheses Development

3.3.1. The Sensing Autonomy and Usage Intention of AI Educators

We first propose that sensing autonomy may help improve users’ information-seeking, social interaction, and entertainment gratifications, which in turn lead to a higher level of usage intention. Sensing autonomy enables AI educators to actively sense the surrounding environment by observing, listening, visually recognizing, and monitoring students’ states at any time, including collecting data from users and the environment (such as text, sound, images, location, environmental temperature, object morphology, and biological status) and further extracting information from these data. When students seek information, AI educators can autonomously sense their information needs and, through their real-time grasp of the environment and users’ statuses, enable users to issue hands-free commands. This type of autonomy allows users to perceive fast, accurate, efficient, effective, and even hands-free information responses [33,80], resulting in enhanced information-seeking gratification. For example, when students find it difficult to understand a teaching video, AI educators may detect students’ confused expressions (such as frowning), actions (such as pausing the video or repeatedly rewatching it), or voice (such as a student asking others for help). Based on these cues, AI educators can recognize that students may be encountering difficulties with this part of the learning content and may need detailed explanations and relevant exercises, and can proactively meet these information-seeking needs (for example, by providing related pop-up information links, attaching relevant exercises after the video, or even switching directly to a more detailed teaching video).
Furthermore, information seeking is often considered an important reason for users to adopt AI technology applications [90,91,92]. As one of the main functions of an educator, the use of an AI educator is an effective way for users to obtain learning-related information [93,94]. For example, users may use AI educators to seek course information, exam information, personalized practice questions, past learners’ experiences, teacher recommendations, etc. Because AI educators can autonomously provide personalized information for users based on artificial intelligence technology, respond to users’ information-seeking needs at any time, allow users to evaluate information in a timely manner, and even help users make decisions and plans based on such information, users are likely to rely on AI educators to satisfy their information-seeking needs. Considering that AI usage is driven by utilitarian motivation [41,62,95], information-seeking gratification may enhance users’ intention to use AI educators. Therefore, we propose the following:
H1a. 

The sensing autonomy of AI educators is positively related to usage intention due to the mediating effect of information-seeking gratification.

Sensing autonomy enables AI educators to actively monitor user status variations and detect changes in environmental conditions. Therefore, AI educators can autonomously capture real-time changes, respond to user calls at any time, and be ready for user commands at any time. For example, users can awaken Xiaomi-brand smart products through the “classmate Xiao AI” voice command. When the smart product senses the user’s voice command, it will immediately respond “I’m here” (About XiaoAI. Accessed on 23 December 2023, from https://xiaoai.mi.com/AboutXiaoai?tab=link003). Previous studies have shown that this type of autonomy leads users to clearly perceive themselves as being cared for by AI agents and to believe that AI agents are always with them, which, in turn, fosters the belief that AI agents are friendly, enthusiastic, intimate, and loyal to students [80]. Meanwhile, a large number of studies have demonstrated that friendliness [96], social presence [97], intimacy [98], and loyalty [99] have significant positive impacts on relationship establishment and maintenance, which, in turn, leads to user engagement in social interactions.
Further, users are likely to interact with AI agents in a human–human interaction pattern [46,49,100,101]. For example, users may voice-command the AI educator to perform specific tasks, request that an AI educator answer questions, ask an AI educator about the course schedule, receive proactive care and reminders from the AI educator, etc. Because AI educators can interact with users in an anthropomorphic [30,40,102], personalized [30,46,101], authentic [46], and responsive [45,103,104] manner, users can clearly perceive the friendliness, care, and warmth of the AI educator [80,101,105]. Therefore, such human–computer interactions may meet or even exceed users’ expectations for social interaction. Based on the U&G theory, social interaction gratification is an important motivation that drives users’ AI acceptance [44,57,59,106], and is likely to increase users’ intentions to use AI educators. Therefore, we propose the following:
H1b. 

The sensing autonomy of an AI educator is positively related to usage intention due to the mediating effect of social interaction gratification.

As mentioned, sensing autonomy allows users to experience a fast response from their AI educators, and such interactions are mostly hands-free. This type of autonomy aligns with users’ existing impressions of future technology and enables them to enjoy a more techno-cool experience. That is, users can strongly feel the intelligence, competence, techno-coolness, and modernity of AI educators in their interactions. Research has suggested that such a modern, futuristic, and intelligent experience satisfies users’ higher-order psychological needs, allowing them to immerse themselves and have fun using the technology [107]. Research has also argued that perceived autonomy can promote users’ perception of intelligence, evoke positive emotions towards AI agents, and enhance users’ interest and enthusiasm [33,80], providing them with a sense of entertainment [108,109].
Moreover, the AI educator’s wisdom and ready responses provide users with an entertaining and interesting experience [110,111,112]. For example, AI educators may present the teaching process in a gamified form to make learning fun [113]. Even questions that are not related to learning can be answered in a timely manner. When users feel tired or bored during study breaks, AI educators can proactively offer mini-games to relax their bodies and minds. Since AI educators can be designed to be relaxing, humorous, and playful [41,110,114,115], and enable users to enjoy novel, modern, and futuristic intelligent experiences [107,116], users are likely to rely on AI educators for entertainment and fun. Considering that hedonic motivation is a prominent factor driving users’ AI acceptance [41,62,90], entertainment gratification may promote users’ intention to use AI educators. Therefore, we propose the following:
H1c. 

The sensing autonomy of an AI educator is positively related to usage intention due to the mediating effect of entertainment gratification.

3.3.2. The Thought Autonomy and Usage Intention of AI Educators

We next elaborate on how thought autonomy may help improve users’ information-seeking, social interaction, and entertainment gratifications, which further influence usage intention. Thought autonomy enables AI educators to independently integrate and analyze information, evaluate, plan, and make decisions in order to provide users with personalized, clear, and rational decisions or plans [33,80]. For example, AI educators can generate learning schedules based on users’ learning habits, recommend exercises and courses based on users’ learning progress, and provide advice on course selection (such as criminal law) and vocational certificate exams (such as the legal professional qualification exam) based on users’ profiles and interests (such as a first-year student interested in law). Hence, when users have information-search needs, this type of autonomy enables AI educators to actively search for and process information, which effectively reduces the risk of information overload when users face massive amounts of information, saves their cognitive effort in information processing, and can even directly achieve the ultimate goal of the information search, i.e., decision making, leading to the efficient and effective satisfaction of users’ information-seeking needs [117]. As discussed, information-seeking gratification should be positively related to students’ usage intentions. Therefore, we propose the following:
H2a. 

The thought autonomy of an AI educator is positively related to usage intention due to the mediating effect of information-seeking gratification.

Thought autonomy enables AI educators to actively collect and analyze information, including user preferences and users’ behavioral habits. Therefore, AI educators can provide personalized decision-making guidance based on the uniqueness of the user, making users feel understood by AI educators. Highly personalized decisions or plans can also make users clearly perceive that they are cared for by AI educators. Research has shown that users’ perceptions of understanding and personalization in human–computer interactions can significantly enhance their perception of social presence [102], help establish friendships between users and virtual agents [101], and make users more willing to engage in human–computer interactions [116]. In addition, thought autonomy enables AI educators to quickly and accurately process massive amounts of relevant information, with an analytical ability that can even surpass the human brain [118,119]. Therefore, from the users’ perspective, AI educators are good teachers and friends who can provide personalized, intelligent, comprehensive, and clear guidance and decision-making advice, thereby increasing users’ trust in and dependence on AI educators, as well as their willingness to interact with AI educators to obtain more decision-making advice [90,120]. As discussed, social interaction gratification should be positively associated with students’ usage intention. Therefore, thought autonomy is likely to become an important factor in usage intention due to the mediating effect of social interaction gratification. We propose the following:
H2b. 

The thought autonomy of an AI educator is positively related to usage intention due to the mediating effect of social interaction gratification.

Thought autonomy is decision-making-oriented and can help users solve problems regarding information processing and decision making. From the users’ perspective, AI educators are always ready to solve problems for them, for example, by determining a learning plan for new courses, helping students prepare for exams within a limited time, helping students overcome knowledge weaknesses, and showing how to establish mind maps. Thus, users may feel a sense of pleasure that their problems can be solved successfully, and perceive that their needs are valued and fulfilled by AI educators. In addition, this type of autonomy can create an intelligent experience for users, increase their interest in using AI educators, and enhance their happiness and enthusiasm during use [80]. As discussed, enhanced entertainment gratification will increase students’ intention to use AI educators. Therefore, thought autonomy is likely to lead to a higher level of usage intention by enhancing entertainment gratification. We propose the following:
H2c. 

The thought autonomy of an AI educator is positively related to usage intention due to the mediating effect of entertainment gratification.

3.3.3. The Action Autonomy and Usage Intention of AI Educators

We finally hypothesize that action autonomy may help improve usage intention by enhancing information-seeking, social interaction, and entertainment gratifications. Action autonomy enables AI educators to independently complete tasks with minimal or no user intervention, including operating devices, playing sound or video, searching for information, and proactively reminding users. This type of autonomy allows AI educators to serve as users’ agents in a hands-free manner; that is, AI educators can perform certain actions without users manually issuing commands [99]. When users need to search for certain information, action autonomy may allow AI educators to complete the information search and proactively send the results to the user before any manual intervention, and even perform further relevant behaviors beyond the user’s expectations [121,122]. This may make information searching more efficient and enhance users’ information-seeking gratification. As discussed, information-seeking gratification will increase students’ intention to use AI educators. Thus, we propose the following:
H3a. 

The action autonomy of an AI educator is positively related to usage intention due to the mediating effect of information-seeking gratification.

Action autonomy enables AI educators to act as agents for users; that is, to replace users in taking action. For example, AI educators can turn on devices without user intervention, automatically download learning materials, perform grading, automatically play teaching videos according to an established course schedule, and issue time alerts to users during exams. When AI performs tasks on behalf of users, users can feel the friendliness, care, kindness, and assistance of AI educators, perceive the intimacy between themselves and AI educators, and establish and maintain social relationships with them [33,80]. This may lead to a higher level of social interaction gratification. Furthermore, as previously discussed, social interaction gratification will increase students’ intention to use AI educators. Therefore, we propose the following:
H3b. 

The action autonomy of an AI educator is positively related to usage intention due to the mediating effect of social interaction gratification.

Action autonomy allows users to enjoy AI educators’ autonomous action execution and serving behavior without additional input, which is consistent with people’s lay beliefs about future technology. As mentioned, when AI meets users’ high-level psychological needs for techno-coolness, users can have fun using AI applications and develop enthusiasm for using AI [107]. Previous studies have also pointed out that AI-enabled virtual assistants with action autonomy can lead users to enjoy intelligent experiences and encourage positive emotional responses [123,124]. Therefore, we believe that action autonomy can satisfy users’ hedonic pursuit of futuristic and modern intelligent experiences, which further leads to increased entertainment gratification. As discussed, enhanced entertainment gratification will increase students’ intention to use AI educators. Thus, we propose the following:
H3c. 

The action autonomy of an AI educator is positively related to usage intention due to the mediating effect of entertainment gratification.

Taken together, our conceptual model is presented in Figure 2.

4. Methods

4.1. Sampling and Data Collection

Due to COVID-19, students have widely adapted to online education. An increasing number of brands and schools are trying to develop “AI teacher” products that provide AI-based intelligent learning services for students. For example, an AI educator developed by iflytek is responsible for teaching all subjects in primary and secondary schools, interacting with students, and generating personalized mind maps. The AI educator Khanmigo, developed by Khan Academy, can teach students mathematics and computer programming. In general, online AI-based education applications are now able to cover the full scope of teaching, self-learning, and exam preparation, generate personalized knowledge graphs for students, offer intelligent correction and rapid diagnosis, identify the weak points behind incorrect answers, and provide targeted practice exercises. In addition, several AI teachers support multi-scenario dialogues, supervise students’ learning, and accompany students in their daily lives through humorous real-time interactions.

The target participants in this study were college students. Following previous studies [125,126,127], we included undergraduate, master’s, and doctoral students in our subject pool as college students, considering that they (1) all study in a college environment, (2) have the need for self-motivated learning, and (3) need the active and specialized teaching provided by AI educators. Participants were recruited through the Credamo platform (www.credamo.com), one of the largest research platforms in China. Because the vast majority of education brands and schools currently focus on developing AI education platforms for primary and secondary schools, there are still relatively few AI education applications for college students. To eliminate the potential impact of prior experience with AI educational applications, this study targeted only college students who had not used any AI educational applications. Following Malhotra and Galletta [128], this study adopted purposive sampling based on two criteria: individuals (1) had to be college students (undergraduate, master’s, or doctoral students), and (2) had to have never used AI education services.

To provide participants with examples of AI educators, this study first presented a video introduction (50 s) describing the services provided by AI educators. The video showcases a tablet-based AI education application from the students’ perspective. In the video, students can interact with an AI educator by clicking on the screen, through a voice wake-up, or via text commands. The online teaching functions of the AI virtual educator are introduced, including online course teaching, intelligently tracking students’ progress in real time, independently analyzing knowledge gaps and visualizing mind maps, automatically developing personalized learning journeys, and actively interacting with students (e.g., answering questions, reminding students to start learning). Participants were told that this AI education application for college students had not yet been launched on the market, and that the brand therefore hoped to investigate college students’ attitudes towards the AI-teacher product before launch. The brand name was hidden, and no brand-related information was provided in the video. After watching the video, participants were asked to evaluate their perceptions of and attitudes towards the AI-educator product. We first conducted a pilot study by collecting 50 responses to the questionnaire from college students and made minor modifications for language and clarity. All participants in the pilot study were excluded from the main survey. A total of 673 unique responses were collected in November 2023.

4.2. Measurement Scales

The measurement instrument consisted of two parts. The first part contained the measurement scales for the conceptual model. Measurement items for all constructs were adopted from scales in the previous literature, as shown in Table 1. Sensing autonomy was measured by four items borrowed from Hu, Lu, Pan, Gong, and Yang [33] (α = 0.880). The four items on thought autonomy were adapted from Hu, Lu, Pan, Gong, and Yang [33] (α = 0.836). The four items on action autonomy were borrowed from Hu, Lu, Pan, Gong, and Yang [33] (α = 0.903). Information-seeking gratification was measured by three items adapted from Lin and Wu [45] (α = 0.810). Social interaction gratification was measured by a five-item scale adapted from Lin and Wu [45] (α = 0.822). Entertainment gratification was measured using three items adapted from Lin and Wu [45] (α = 0.763). The three items on usage intention were borrowed from McLean and Osei-Frimpong [60] (α = 0.845). All items were measured on a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree). The second part collected the participants’ demographic information, including gender, age, grade level, experience of using AI educators, experience of using AI applications other than AI educators, and experience of participating in online education.
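For readers unfamiliar with the reliability coefficients reported above, the following minimal sketch shows how Cronbach’s alpha is typically computed from a respondents-by-items matrix of Likert scores. The toy ratings are invented for illustration only and are not taken from our dataset.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Example: five hypothetical respondents rating a three-item construct on a 1-7 scale.
ratings = np.array([[6, 7, 6],
                    [5, 5, 6],
                    [7, 7, 7],
                    [4, 5, 4],
                    [6, 6, 5]])
print(round(cronbach_alpha(ratings), 3))
```

Values above roughly 0.7 are conventionally taken to indicate acceptable internal consistency, which is the benchmark the alpha values reported above satisfy.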

4.3. The Profiles of Respondents

After excluding invalid responses, in which the demographic information did not match the sampling criteria of this study, 673 valid responses were collected. The specific statistical results of respondent profiles are shown in Table 2.

Firstly, 69.39% of the respondents were undergraduate students aged 17 to 22, with 51.18% being female. In total, 15.63% of the respondents were first-year students, 18.85% were second-year students, 35.33% were third-year students, and 30.19% were fourth-year students.

Secondly, 24.96% of the respondents were master’s students aged 21 to 25, with 48.21% being female. In total, 55.36% of the respondents were first-year master’s students, 32.14% were second-year master’s students, and 12.50% were third-year master’s students.

Finally, 5.65% of the respondents were doctoral students aged 22 to 29, with 50.00% being female. In total, 71.05% of the respondents were first-year doctoral students, 18.42% were second-year doctoral students, and 10.53% were third-year or above doctoral students.

Overall, 50.37% of the participants were female. No participants had used AI educators; those who had were labeled as invalid samples. A total of 84.84% of participants had used AI applications other than AI educators, and only 15.16% had not. Regarding experience of participating in online education, 89.01% of participants stated that they frequently participated, 6.98% stated that they had participated occasionally, and 4.01% had almost never participated.

6. Discussion

Drawing on the uses and gratifications theory, our study analyzes how the artificial autonomy of an AI educator leads to students’ usage intentions by improving user gratification. Regarding the U&G benefits, we focus on the three most salient and robust dimensions: information-seeking gratification, social interaction gratification, and entertainment gratification. Previous research has highlighted the various categories of AI autonomy and their importance; however, to our knowledge, few studies classify autonomy into multiple types as distinct influencing factors of usage intention. To this end, this study proposes a novel theoretical model that takes the sensing autonomy, thought autonomy, and action autonomy of AI educators as antecedents of intention to use, and examines the mediating role of user gratifications (i.e., information-seeking gratification, social interaction gratification, and entertainment gratification). In doing so, our findings provide new insights into how artificial autonomy motivates users to use an AI educator through multiple gratifications.

The findings reveal that the three types of AI educator autonomy are associated with different user gratifications, which extends the findings of recent studies on the influence of artificial autonomy in sustainable higher education. Firstly, our study demonstrates that the sensing autonomy of AI educators is positively related to usage intention through the mediating effects of social interaction gratification and entertainment gratification. Contrary to our proposed hypothesis, our findings showed no significant relationship between sensing autonomy and information-seeking gratification, resulting in an insignificant indirect effect of information-seeking gratification in the impact of sensing autonomy on usage intention. A possible explanation is that, unlike primary and secondary school students, students in higher education learn in ways that are often self-driven, self-determined, and self-regulated [140,141,142,143]. Students in higher education engage more actively in their own learning, know how to learn, and understand how to find solutions to their learning problems. Thus, their information-seeking needs exist more in their minds and less in external expressions that the AI can detect. For example, when an information-seeking need arises, students may not voice it to their classmates (which would allow the AI educator to sense it); instead, they are more likely to start searching directly. Although AI educators are able to sense changes in students’ statuses, such as pausing a video, they still cannot fulfill students’ need for information seeking because the students have already taken action themselves.
Second, our findings show that the thought autonomy of AI educators is positively related to usage intention due to the mediating effects of information-seeking gratification and social interaction gratification. However, contrary to the proposed hypothesis, the indirect link between thought autonomy and usage intention via entertainment gratification is not significant, because thought autonomy has no significant influence on entertainment gratification. A possible reason is that thought autonomy, which enables an AI educator to generate decisions or plans, is more problem-solving-oriented and associated more with utilitarian goals than hedonic goals [144,145]. Additionally, prior studies have found that although AI autonomy may increase users’ passion [80], it may also lead to negative experiences such as techno-stress [50]. Thus, it is possible that students who receive decisions or plans offered by AI educators focus more on utilitarian needs, such as evaluating whether the decision is optimal and whether the plan is feasible, and therefore cannot relax and entertain themselves.
Third, our findings indicate that the action autonomy of AI educators is positively related to usage intention by increasing information-seeking and entertainment gratification. However, the proposed hypothesis about the indirect link between action autonomy and usage intention through the mediating path of social interaction gratification was not supported. A possible explanation is that AI with a high degree of autonomy can complete tasks autonomously without human intervention, leaving users with little opportunity for human–machine interaction [146,147]. For example, AI educators with a lower level of action autonomy require clear commands from users before taking action. During the process of giving and receiving commands, users may perceive a sense of interaction with social agents and further establish social connections with AI educators. However, AI educators with a higher level of action autonomy can take action without commands from users. As a result, users need not interact with the AI educators, making it difficult to establish perceptions of a social connection with them.

6.1. Theoretical Contributions

Our study makes several contributions to existing knowledge. First, most prior studies in the AI-education literature focused on the perspective of teachers, such as what an AI educator can do for human teachers and whether human teachers are willing to accept AI educators, while very little research has paid attention to the perspective of students. In addition, among the few studies regarding students’ usage of AI educators, most efforts were devoted to primary and secondary education, and fewer to higher education. Moreover, previous studies seldom delved into the effects of the specific design features of AI educators. Our findings contribute to the sustainable education literature relating to AI-driven education by disclosing the power of technology design to satisfy students’ intrinsic needs in sustainable education. Our study developed a theoretical model that integrates artificial autonomy, one of the most important features of an AI educator, and user gratifications as factors influencing usage by students in higher education. Specifically, our findings reveal that the sensing autonomy of AI educators is positively related to usage intention through increased social interaction and entertainment gratification; thought autonomy enables an AI educator to fulfill students’ information-seeking and social interaction needs and thus increases students’ usage intention; and action autonomy helps the AI educator fulfill the demands for information seeking and entertainment to induce usage intention. This finding reconciles the concerns in prior research from the teacher perspective [11,148], and is aligned with recent studies from the student perspective [149,150], which emphasize the importance of research from both perspectives. Thus, our study highlights the importance of the artificial autonomy of AI educators in enhancing usage intention through user gratification, and further offers a comprehensive angle for future research to understand the differentiated power of the three types of artificial autonomy on distinct gratifications from the student perspective.

Second, we contribute to the artificial autonomy literature by disclosing its power in determining user gratification and usage. Although studies on the effects of AI design features, such as anthropomorphism, responsiveness, and personalization, are booming, very few efforts have been devoted to examining the effect of artificial autonomy. More importantly, prior studies have reached mixed conclusions on the influence of artificial autonomy. Our study provides an integrated perspective on how different types of artificial autonomy affect distinct user gratifications and further influence usage intention in the context of higher education, which, to some extent, can reconcile these mixed findings. Specifically, our findings show that students in higher education are motivated to use AI educators by different benefits, and that these benefits are influenced by distinct types of artificial autonomy. For example, we find that sensing autonomy enables AI educators to fulfill social interaction and entertainment needs, but does not increase information-seeking gratification. The thought autonomy of AI educators increases students’ information-seeking and social interaction gratifications, but is not related to entertainment gratification. Action autonomy induces students to use AI educators through their information-seeking and entertainment motivations, but cannot motivate student usage by satisfying social interaction needs. Therefore, our findings emphasize the nonidentical effects of the artificial autonomy of AI educators on students’ usage intention through the distinct mediating paths of multiple user gratifications.

Third, our study reveals the significant power of leveraging the U&G theory to investigate the impact of AI design features on AI usage intentions. The U&G theory has a long history of development. A large number of scholars have drawn on this theory to investigate the antecedent factors and consequent outcomes of multiple gratifications. However, very few previous studies have drawn on the U&G theory to examine the role of artificial autonomy in improving AI usage. Our findings disclose the power of the U&G theory in two ways. With regard to the antecedent factors of gratification, our findings disclose different factors of distinct gratifications. Specifically, students’ information-seeking gratification is positively associated with the thought autonomy and action autonomy of AI educators. Social interaction gratification is increased by sensing autonomy and thought autonomy. Entertainment gratification is enhanced by sensing autonomy and action autonomy. Regarding the consequent outcomes of gratifications, we find that, in the context of higher education, information-seeking, social interaction, and entertainment gratifications are all positively related to AI educator usage intentions. Our findings highlight the distinct role of different types of user gratification in the effects of AI autonomous features on usage intention, which extends the extant understanding of the effects of artificial autonomy and how students’ use of AI educators is driven by different motivations in the context of higher education.

6.2. Practical Implications

Our study also has several practical implications. Although AI education is not a new concept, AI technology is still far from widespread in higher education. How AI educators should be designed to promote students’ use intention remains a challenge for suppliers and designers. While it is important for higher education schools and teachers to implement innovative technologies from a sustainable perspective, for those technologies to be deeply involved in the learning process, it is first necessary to understand how students perceive the technologies, particularly their different motivations to use them. In other words, better understanding student gratification and the intention to use an AI educator is a critical first step in implementing AI technology to effectively improve the sustainable development of higher education. Our findings highlight important areas that the suppliers and designers of AI educators need to consider, such as the autonomous design of AI educators and the gratifications that motivate students in higher education to use AI educators.

First, our study offers insights from the student perspective into how students perceive and react to AI educators with different types of artificial autonomy. Our findings provide specific guidelines for the suppliers of AI educators, such as the important roles of information-seeking, social interaction, and entertainment gratifications in inducing students to use AI educators. Additionally, while all three gratifications were identified as significant benefits that should be associated with AI educators, it is important for suppliers to understand that students in higher education may pay more attention to particular gratifications in their use of autonomous AI educators. Thus, when suppliers do not have sufficient capacity to guarantee all gratifications, they are advised to give the highest priority to the distinct student needs that the chosen autonomous design of the AI educator can satisfy.

Second, our findings identify important autonomous features of AI educators for designers to consider when designing differentiated AI educators. The findings of our study show that sensing autonomy plays a significant role in social interaction and entertainment gratifications, thought autonomy is essential for information-seeking and social interaction gratifications, while action autonomy is critical to increasing information-seeking and entertainment gratifications. When designing AI educators with different usage purposes, such as designing for social interaction with students, or designing for students seeking information, designers should consider corresponding strategies that attach different types of autonomous features to AI educators to enhance students’ usage intentions.

Third, our study provides specific guidelines for aligning the suppliers and designers of AI educators. In some cases, the requirements proposed by the supplier cannot be met by the designer, and our findings offer possible solutions to such contradictions. For example, when designers are unable to provide the thought autonomy feature required by suppliers, our findings suggest that designers can provide action autonomy to meet users’ information-seeking needs and sensing autonomy to satisfy users’ entertainment needs, achieving effects similar to those of thought autonomy. Similarly, when sensing autonomy cannot be offered, our findings demonstrate that designers can provide thought autonomy to increase social interaction gratification and action autonomy to enhance entertainment gratification. When suppliers require higher levels of information-seeking and entertainment gratifications, which would otherwise be induced by action autonomy, our findings recommend that designers attach a sensing autonomy feature to satisfy the need for entertainment and increase thought autonomy to improve information-seeking gratification.

6.3. Limitations and Future Directions

This study has several limitations, which provide possible directions for future research. First, our study collected data from 673 college students in China. Future research is recommended to take cultural factors into consideration and extend our research model to other countries. Second, this study adopted a survey method to verify the influence path of AI-educator autonomy on students’ usage intentions and the mediating role of U&G benefits. Future research may build on this study, for example, by using experimental methods to manipulate high and low levels of artificial autonomy in order to measure their impact on college students’ intentions to use AI educators and the mediating role of gratifications, or by verifying the generalizability of our findings in field settings. Third, this study enabled participants to understand the core functions and usage experience of AI educators through video-viewing, ensuring that participants had a basic understanding of AI educators. As AI education applications are implemented in higher education, future research can use scenarios based on specific AI education applications and collect data from college students who have actually used an AI educator to verify our research findings. Finally, this study did not distinguish between types of college students, such as university tier or academic discipline. Future research may compare different types of college students to explore the possible boundary conditions of our proposed model.

