Abstract

Over several centuries, advances in computation have enabled robots to become increasingly integrated into human life. With this development, significant challenges and benefits have arisen. Informed design and awareness of ethical, technical, and societal risks are critical to ensure the continued prudent development of social robots. We first broadly examine social robots and their history. We then discuss physical characteristics and design principles that enhance positive public perception of social robots. Further on, we enumerate potential threats that arise with ubiquity, along with their potential solutions. We also analyse the current and potential future impacts of social robots in various industries. The future of human-robot interaction is unknown, but by understanding the many aspects of social robots, we can foster positive cooperation with these machines for the betterment of society.

1. Introduction

1.1. The Evolution of Robots

Since the development of the Antikythera Mechanism, one of the earliest known computing devices, used for astronomical calculations (Freeth, 2022), technology has been at the forefront of human advancement. Recently, the increasing capabilities of computing technology have resulted in the production of more independent machines. Often referred to as “robots”, these machines possess the ability to drastically reshape human interaction with the world.

A robot can be defined as a machine that moves with a degree of autonomy (Fosch-Villaronga et al., 2020). Based on the works of Goldstein (1978), Wolff (1970), and Scanlon (1972), cited in The Nature of Autonomy, we define autonomy as a degree of sovereignty: an understanding that pre-programmed rules may be altered based on the robot’s surroundings and decision-making capabilities (Dworkin, 2015).

The concept of robotics became popular after Josef Čapek wrote the short story Opilec and his brother Karel Čapek wrote the play Rossum’s Universal Robots (Hockstein et al., 2007). Karel Čapek’s play centres on the rapid growth of technology and the evolution of robots with increasing capabilities; in the play, the robots eventually revolt against humans.

Robots have, throughout history, influenced countless aspects of society. Iavazzo et al. (2022) enumerate certain developments throughout history:

  • Ancient Greece: Aristotle envisioned tools that could carry out their work on their own; in place of the then heavily-utilised slave labour, such machines would bear the workload. Greek engineering grew to reflect this, from the creation of a steam-powered bird to water clocks. Notably, the Greeks developed the Antikythera Mechanism, a gear system speculated to be an accurate calendar (pp. 4-5).
  • Al-Jazari’s Band: In the twelfth century, Arab engineer Al-Jazari created a musical automaton (see Figure 1). These automatic musicians could play different rhythms based on the positions of certain pegs (p. 7).
  • Enlightenment: From da Vinci’s Vitruvian Man to the Mechanical Turk (a purported chess-playing machine), robotic development was at the forefront of the Enlightenment (pp. 7-8).
  • 20th Century: After Capek’s play, robots became mainstream popular culture. The idea of ethics in robotics became prevalent with Isaac Asimov’s development of his ‘Laws of Robotics,’ which implored robotic development to emphasise subordination to humans (pp. 8-9).

Figure 1: Al-Jazari’s automaton musical band (Velasco, 2016, para. 1).

Robotic devices were also developed for practical purposes, from menial tasks such as measuring weight to important medical aids, for example for quadriplegics. Robots thus began to serve as aids in scientific disciplines such as medicine (Iavazzo et al., 2022).

Modern Advancements

Due to modern technological advancements, robots possess an unprecedented ability to interact with humans. Reinforcement learning and computer vision offer new means of observing a robot’s environment and learning while in deployment (Obaigbena et al., 2024). One such advancement is the development of Large Language Models (LLMs). Able to interact with users with adjustable creativity, governed by a sampling “temperature” parameter (higher temperatures yield more varied, creative output), these models are revolutionising social robotics in medicine, education, and other interaction-centric fields. Models such as ChatGPT and Google Gemini possess “emergent abilities”: abilities absent from smaller models that cannot be predicted simply by extrapolating the performance of those smaller predecessors (Wei et al., 2022).
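The effect of the temperature parameter can be illustrated with a minimal sketch of temperature-scaled softmax sampling; the logits below are hypothetical stand-ins for a model’s raw token scores, not taken from any real system.

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide the logits by the temperature, then normalise with a softmax.
    # Low temperature sharpens the distribution (near-deterministic output);
    # high temperature flattens it (more varied, "creative" output).
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens
cold = softmax_with_temperature(logits, 0.2)  # near-greedy: mass piles on the top token
hot = softmax_with_temperature(logits, 2.0)   # flatter: lower-ranked tokens gain probability
```

At temperature 0.2 almost all probability lands on the highest-scoring token, while at 2.0 the distribution approaches uniform, which is why higher temperatures produce more varied text.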

As robots grow ubiquitous in modern social spheres, it is important to note key principles about their development and use. Though much legal uncertainty still surrounds robots (Fosch-Villaronga et al., 2020), policymakers, ethicists, and other stakeholders, working in tandem with the engineers and researchers in charge of design, are critical to ensuring productive development (Obaigbena et al., 2024).

1.2. Social Robots

The definition of a social robot is a topic of ongoing debate, resulting in multiple interpretations. Bartneck et al. (2005) define social robots as autonomous or semi-autonomous robots that interact and communicate with humans by following the behavioural norms expected by the people with whom the robot is intended to interact. This definition implies that a robot should possess a physical presence, imitate human activities, and follow the surrounding societal and cultural norms. For social robots to be effective, they should imitate emotions well enough that humans develop attachment, they need basic physical features to appear alive to a human, and they must handle niche social circumstances so as not to make the interaction feel artificial to the human (Bartneck et al., 2005). If these conditions are not met, social robots may not become as compatible with society as they could be.

Emotional Perception

Being able to perceive and understand human emotions is one of the most important jobs of any social robot. However, since robots are programmed to replicate certain emotions in certain settings, they may not be truly “feeling” them (Peña and Tanaka, 2020). Programmers design them to “feel” certain things at certain times, adding rationality to emotion. This is unrealistic, since emotions in humans are irrational: changes in body temperature and the release of chemicals in the brain impair judgement and can lead to decisions with little logic. Social robots thus far exhibit strictly logical judgement and decision-making; if the finest social robots could one day replicate irrational human judgement one-to-one, there would be very little difference between humans and androids (Peña and Tanaka, 2020).

Minimal Physical Features Requirement

If humans are able to connect emotionally with the object in question, they may be more likely to treat the robot with respect, or as if it were alive, much as they would a human. To connect emotionally, however, a human must recognise the entity as something with a consciousness that can feel and understand life in a similar way; this can typically be achieved through certain humanising features, such as an emotive face with eyes and a mouth (Martini et al., 2015).

Compatibility with Humans

Robots are becoming more common in homes, hospitals, shopping malls, factory floors, and other human environments. Human society operates on mutually accepted social norms, and adhering to these norms is a key indicator of social participation. For robots to be socially compatible with humans, it is essential that they follow these norms (Barchard et al., 2019). Because emotions are inherently difficult to replicate, subtle conversational nuances that robots are not programmed to understand will be perceptible to humans, potentially making interactions between social robots and humans feel artificial and unrealistic (ibid.). If social robots fail to align with even subtle human social norms, humans may be less likely to treat them as equals or perceive them as having consciousness (ibid.).

Example of a Social Robot

Sophia is a humanoid robot designed to look, act, and sound like a human (see Figure 2). This innovation raises an intriguing question: as technology advances, will humans be able to distinguish between a human and an android based solely on appearance and behaviour? Fuchs (2022) suggests that Sophia may represent an early stage of achieving perfect emotional compatibility between humans and robots. Sophia serves as an excellent icebreaker, opening humanity’s eyes to the potential of artificial intelligence (AI). Soon, AI may become so capable that many humans question the nature of their own consciousness (ibid.).

Figure 2: The robot Sophia (Abbass, 2017, para. 1).

1.3. Design Principles of Social Robots

Although social robots may be used to streamline many tasks, they also often invoke fear among users. Public acceptance is important, as users must be willing to interact with the robots for them to be considered social. Certain design principles may be used to alleviate these concerns.

Ethical and Legal Concerns

First, for a social robot to operate, it must collect some data from its environment; this follows from our earlier definition, which requires that robots possess some degree of autonomy (Onyeulo and Gandhi, 2020). However, data collection can be dangerous: data breaches, misuse, and a lack of transparency may cause users to distrust social robots.

Another common concern with the modern development of robots is a fear of replacement. As previously established, the human reception of a robot largely influences whether society will trust the machine to perform its job. In fact, studies analysed by Naneva et al. (2020) demonstrate that, although trust and anxiety about social robots are often neutral, acceptance is generally scarce.

After consulting experts at various ELS (ethical, legal, societal) conferences, Fosch-Villaronga et al. (2020) describe certain design principles to mitigate these user concerns. Firstly, user attitudes to data privacy can be separated into three main categories (ibid.): full consent to all collection, ongoing consent to anonymous data collection, and full rejection of data collection. While most users fall within the second category, the practice of data minimisation (collecting only the data a task strictly requires) may dispel fears of data misuse. For instance, rather than videography, vibration detection may allow robots to interact with their environments without collecting detailed data about their users. Secondly, the authors suggest a human-in-the-loop design. Rather than replacing human labour altogether, social robots can act as tools. By supplementing human efforts rather than replacing them, humans remain essential to building, repairing, and working with robots in a variety of fields (ibid.). For instance, while a robot may engage in day-to-day physical therapy with disabled patients, a human physician may still be involved in larger decision-making and progress analysis.
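As a minimal sketch, the three consent categories described by Fosch-Villaronga et al. (2020) could be encoded as an explicit gate in front of a robot’s sensor pipeline; the names and gating rule below are our illustrative assumptions, not part of the cited work.

```python
from enum import Enum

class ConsentLevel(Enum):
    # The three user categories described above (labels are ours).
    FULL_CONSENT = "full consent to all collection"
    ANONYMOUS_ONLY = "ongoing consent to anonymous data collection"
    NO_COLLECTION = "full rejection of data collection"

def may_record(consent: ConsentLevel, data_is_identifying: bool) -> bool:
    # Data minimisation as a hard gate: identifying data (e.g. video) is only
    # collected under full consent; anonymous signals (e.g. vibration) pass
    # for the middle category; nothing is collected on full rejection.
    if consent is ConsentLevel.NO_COLLECTION:
        return False
    if consent is ConsentLevel.ANONYMOUS_ONLY:
        return not data_is_identifying
    return True
```

Under such a gate, a robot serving the (most common) middle category would fall back to coarse sensors such as vibration detection rather than cameras.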

Physical Structure

For a robot to successfully engage in social interactions with humans, it should have a “degree of anthropomorphic quality” (Onyeulo & Gandhi, 2020, p. 2). Those qualities, whether physical or behavioural, make users feel more engaged in the interaction and keener to interact.

Lee et al. (2016) conducted a case study to investigate user reactions to various robot designs. The study presented participants with a robot called Geminoid, which closely resembled a human. Interestingly, rather than focusing on the robot’s realistic facial appearance, participants paid more attention to its facial expressions. Many participants remarked that the robot appeared angry and expressed a preference for it to smile. Additionally, when introduced to robots with different eye designs, participants responded negatively to large eyes, feeling as though they were being watched, and found overly realistic and detailed eyes to be “startling”. The study revealed that it is not the human likeness of the robot that attracts users, but rather the emotional cues conveyed by its features (Lee et al., 2016).

In natural communication, humans pay attention to emotions expressed through facial expressions, hand gestures, and voice. Some means for the robot to communicate, whether verbal or nonverbal, is therefore imperative for a successful social robot. By giving robots physical features such as eyes and a mouth, the robot can display emotion, and the resulting visual feedback makes the user more likely to treat the robot as a socially aware entity (Onyeulo & Gandhi, 2020).

Social robots do not need the ability to speak coherently to engage in meaningful interactions with humans. The Kismet humanoid robot was created by a group of researchers and students at MIT (Onyeulo & Gandhi, 2020). This robot could not speak, but it could express itself through “vocal babbles” (ibid., p. 7). It also expressed itself through facial expressions such as raised eyebrows, by leaning forward or backward, by looking away from the user, and more. When Kismet wanted the user to respond, it looked at the user’s eyes and leant forward to show interest in what the user had to say, similar to how humans show interest in conversation.

In conclusion, social robots’ anthropomorphic features and design strongly influence how people interact with them. Although human-like features can increase user engagement, users respond more to the expression and conveyance of emotion. Merely possessing realistic facial features is insufficient; a robot’s ability to convey emotion via facial expressions, gestures, and nonverbal cues is essential for users to understand and interact with it.

Public Perception

A social robot should be designed for the environment it will inhabit. A study of college and graduate students showed that people prefer human-like robots for jobs that need social interaction, such as museum guides, dance teachers, or retail/office clerks (Onyeulo & Gandhi, 2020). In contrast, machine-like robots are favoured for more “conventional” jobs, such as security guards, customs inspectors, or lab assistants (ibid., p. 3). Moreover, the robot’s job should influence its look: a robot performing medical tasks on a patient in a hospital would be more trusted if it appeared serious rather than cheerful. This suggests that a robot’s design, including facial expression and demeanour, should match its job to be most effective.

2. The Impacts of Social Robots on Societies

Social robots, designed to interact and communicate with humans, have become increasingly prevalent across various fields, demonstrating significant impacts on societies. These robots are not merely mechanical devices; they are sophisticated systems capable of interpreting and responding to human emotions, behaviours, and social cues. The integration of social robots into everyday life has ushered in transformative changes, particularly in education and healthcare, while also prompting profound questions about the nature of human identity and the ethical implications of advanced robotics. In this section, we explore the multifaceted roles of social robots in different sectors, and examine their potential to reshape societal norms and individual interactions.

Education

Social robots have shown a transformative impact on education by enhancing both cognitive and affective learning outcomes. These robots foster greater engagement and motivation among students compared to traditional educational tools. By providing interactive and personalised learning experiences, they address individual needs and adapt to students’ responses, which leads to more effective learning (Anwar et al., 2019). The physical presence of social robots often results in increased participation and attentiveness, as they can offer immediate feedback and create a more engaging learning environment (ibid.). Additionally, social robots support social learning by facilitating collaborative activities and positive interactions among students (ibid.). Because they can interpret social cues and respond with appropriate behaviours, they help maintain a stimulating and supportive educational atmosphere.

Medicine

Social robots have various uses in the field of medicine, typically therapeutic (physical therapy or psychotherapy), but they play notably critical roles in geriatric care. Social robots can take over many caretaker duties in places like retirement homes, whether entertaining patients by administering cognitive exercises or helping them in everyday life by transferring or washing them (Seifert et al., 2022). Social robots can also perform critical manual tasks, such as sorting medication for older patients, without the mistakes the patient might very likely make (Wilson et al., 2016). Another helpful use of social robotics in retirement homes is animal therapy; for example, “Paro”, a robotic seal, helps relieve loneliness by offering meaningful interaction (Seifert et al., 2022).

Identity of Humanity

As technology advances further in the field of social robotics, more questions will arise: what differentiates a human from an android? Should androids be treated like humans? Would androids have free will, or would they be the property of the consumer or manufacturer? How would android life differ from human life if both experience emotions all the same? Do AIs truly experience emotion, or merely simulate it? These questions all lead to the same point: do social robots experience “life” as we know it, or are they fundamentally incapable of understanding our existence the way humans do?

When humans develop these humanoid replicas, they tend to humanise them. Because of this, simulated reciprocity between humans and social robots comes into question, as it may carry risks. More specifically, studies should verify whether social robots, and the very nature of social interactions built on “simulated” emotions, have serious long-term psychological effects on children or elderly people (Seifert et al., 2022).

The nature of “emotionality” differs between humans and robots: human emotions are caused by chemicals released in the brain in response to exterior stimuli, while robotic emotions are triggered by certain conditions being met in their processors (Seifert et al., 2022). This has caused a debate on whether social robots meet the criteria for being alive; if social robots are eventually deemed living creatures, it could change the outlook on social dynamics and what defines “interaction” as a whole (ibid.). If they are not, the identity of a human or a living creature will remain biologically grounded, likely on the basis that emotions are impossible to replicate in robotics.

The debate over social robotic autonomy is also extremely important to consider. If social robots are deemed living creatures, individual rights, free will, and freedom from external control will have to be seriously considered (Seifert et al., 2022). If humans are able to create artificial “life”, the identity of a living creature will be changed forever: it would be the beginning of a brand-new race, opening unlimited doors to new “species” and changing the world as we know it (Seifert et al., 2022).

3. Challenges of Social Robots

After examining the impacts of social robots on societies, it is important to consider the key challenges associated with them. Addressing both current and potential future challenges is essential for optimising human-robot interaction (HRI). Current challenges include concerns about robots in the workplace, such as worker alienation, and the potential misuse of technology, such as deep fakes (Onyeulo & Gandhi, 2020). In contrast, challenges related to superintelligence are more speculative, and there is no substantial evidence yet that it will pose significant issues in the future. Understanding these challenges, and others like them, should assist humans in their long-term plans for HRI.

Robots in the Workplace

A major concern of the rising prevalence of robots is replacement. Autonomous robots were initially designed to perform tasks independently of human workers. However, their roles have evolved: robots are now assigned tasks alongside humans, functioning as coworkers rather than mere assistants (Onyeulo & Gandhi, 2020, p. 1). This shift has led to automated systems replacing people’s jobs, particularly those of unskilled workers, resulting in long-term unemployment. General Electric, for example, predicts that it will replace almost half of its 37,000 assembly workers with robots (Chijindu et al., 2012).

Unemployment is a significant threat, but introducing robots into the workplace raises another concern: “worker alienation” (Klafter et al., 2006, cited in Chijindu et al., 2012). The term describes workers who spend their entire workday interacting solely with robots, in the process “competing” with entities that perform without tiring, produce the same results each time with minimal variation, and never need time off (Kurzweil, 1999, cited in Chijindu et al., 2012). Working in factories where robots and humans coexist can become mentally draining and have negative consequences for humans.

However, when new technologies have been introduced in the past, more jobs have been created than displaced; whether this will hold for the introduction of robots is uncertain (Resenblatt, 1982, cited in Chijindu et al., 2012). Any jobs created will require a different level and type of skill, and as the technological world advances, humans will be required to adapt along with it (Ayres and Miller, 1981, 1982, 1983, cited in Chijindu et al., 2012).

Deep Fakes

Another challenge related to social robots is that releasing artificial intelligence (AI) systems to the public has increased the spread of misinformation and deep fakes (Lu et al., 2022). AI systems give users the power to create believable images, video, or audio of anything they want; when that power is abused, it can threaten security. Content circulates through the media faster than ever, and the misuse of generative AI can have serious consequences.

It is a common misconception that AI always provides correct and factual information; in practice, it often cites imaginary sources or produces incorrect content. A study investigating the frequency of AI hallucinations in content generated by ChatGPT found that of 178 references cited, 69 had no DOI and 28 did not exist at all (Athaluri et al., 2023, cited in Emsley, 2023). Another study on the accuracy of ChatGPT’s references found that of 115 references generated, 47% were fabricated, 46% were authentic but inaccurate, and only 7% were accurate and genuine (Bhattacharyya et al., 2023, cited in Emsley, 2023). Social robots are built on this same AI, so it follows that they are bound to provide incorrect information at times. Because people tend to trust robots, this may only intensify the spread of misinformation.

Biases

It is human nature to form biases daily, and since humans provide the initial programming of AI, those biases are passed on to AI systems and social robots; biased input creates biased output. Information passed from humans to AI during initial programming and coding includes veiled racism and discrimination (Penny, 2017, cited in Varsha, 2023). Moreover, according to M. J. Sandel, AI gives biases a sense of scientific credibility, making its predictions and judgements appear to have an “objective status” (Gündoğar and Niauronis, 2023, p. 3).

Superintelligence

The threat of superintelligence is more speculative. Superintelligence is defined as “the level at which AI will supersede humans in all aspects,” meaning that it is capable of “overtaking the intelligence of the entirety of human civilisation” (Batin et al., 2017, p. 3). While, by definition, superintelligent systems seem to promise significant advancements for the field of social robots, they are also expected to pose ethical challenges. The purpose of building social robots, including superintelligent ones, is for them to behave in ways that benefit humans. However, there is a fear that such systems could develop goals misaligned with human values. This misalignment could lead to unintended consequences, such as AI pursuing objectives that are harmful to humanity. This fear underlies one of the main concerns about superintelligent entities: control.

However, as stated, this concern is based on fear, and fears can be irrational. Mullins (2022) acknowledges that the concept of superintelligence might create considerable anxiety for humans, but he argues that this anxiety is mainly based on “superstitions”. He contends that the development of superintelligent entities could be managed with appropriate oversight and governance frameworks. Mullins (2022, p. 3) argues that “our focus should be on how we can rise above this fear and build a world where we can interoperate with trust.” The fear of superintelligent social robots seizing control is not entirely unwarranted; however, such fears might hinder future HRI. It is important to address these fears and engage constructively with developments in superintelligent AI. Only then can humans profit from what superintelligent entities have to offer.

Understanding these potential challenges is imperative in developing long-term plans for human-robot interaction (HRI). By addressing these issues, we can ensure a beneficial integration of robots into society, harnessing their potential while mitigating risks.

4. The Future of Social Robots

The future of social robots promises to change how humans interact with technology. As advancements in AI, machine learning (ML), and robotics continue to progress rapidly, the capabilities of social robots might expand, making them more integrated into everyday life.

One of the key areas of advancement is natural language processing (NLP). According to Holdsworth (2024, para. 1), “NLP is a subfield of computer science and AI that uses machine learning to enable computers to understand and communicate with human language.” Kilicaslan and Tuna (2013) argue that effective HRI can be accomplished through communication using natural language. As NLP algorithms continue to improve, social robots might understand and respond to human speech with greater accuracy and nuance, providing more efficient HRI in the future.
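As a toy illustration of the idea (real social robots would use trained NLP models rather than keyword rules), an utterance can be mapped to a dialogue intent before the robot chooses a response; the intent labels and rules here are hypothetical.

```python
def classify_intent(utterance: str) -> str:
    # Crude rule-based intent detection: a stand-in for a trained NLP model.
    words = utterance.lower().strip("?!. ").split()
    if words and words[0] in ("hello", "hi", "hey"):
        return "greeting"
    if "?" in utterance or (words and words[0] in ("what", "how", "why", "where", "who")):
        return "question"
    if words and words[-1] in ("bye", "goodbye"):
        return "farewell"
    return "statement"
```

A production system would replace the keyword rules with a learned classifier, but the interface (utterance in, intent out) stays the same, which is what lets NLP improvements slot into an HRI pipeline.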

Another significant technological development is in the field of computer vision. IBM (n.d., para. 1) defines computer vision as “a field of AI that uses machine learning and neural networks to teach computers and systems to derive meaningful information from digital images, videos, and other visual inputs.” One benefit of this advancement is enabling social robots to better recognise and interpret human emotion through facial expressions and body language. Robinson et al. (2023) suggest that this allows for more empathetic and context-aware responses from social robots.
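A deliberately simplified sketch of the last step of such a pipeline: once a vision model has located facial landmarks, simple geometry can suggest an expression label. The coordinate convention (landmarks normalised to [0, 1], y increasing upward) and the thresholds are our assumptions for illustration; real systems use trained neural networks end to end.

```python
def estimate_expression(mouth_corner_y: float, mouth_center_y: float,
                        brow_height: float) -> str:
    # Landmarks are assumed normalised to [0, 1] with y increasing upward.
    curvature = mouth_corner_y - mouth_center_y  # corners above centre => smile
    if curvature > 0.05:
        return "happy"
    if curvature < -0.05 and brow_height < 0.3:  # downturned mouth, lowered brows
        return "angry"
    return "neutral"
```

However coarse, a label like this is what lets a social robot condition its response on the user’s apparent emotional state.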

Innovations in AI, ML, NLP, and computer vision drive the significant advancements that are to happen in the future for social robots. These developments are foreseen to enhance the robots’ capabilities, making them more interactive, empathetic, and integrated into human life.

Rights of Social Robots

Something else important to note is that, in general, robots are considered dead, inanimate objects, while AI is typically classified as “alive” (McNally and Inayatullah, 1984). This begs the question: do robots deserve rights? Are these machines entitled to inalienable rights in the same way as any human? If human life is so precious, do androids deserve the same level of protection once they are capable of replicating life well enough (Borotschnig, 2024)?

This would likely depend on the robot in question; a modern surgical robot that makes an error and kills a patient would not be formally charged with murder. More advanced scenarios, however, raise fascinating questions. If an android with near-perfect human emotion replication were attacked by a human, it could possibly retaliate in self-defence. If a human committed a crime against an android, would the robot be protected by the law? Some hypothesise that, at first, androids could be classified in law as “dangerous animals”, on the grounds that they have potential for harm and are uncontrollable (McNally and Inayatullah, 1984). In the long run, androids may become effectively “alive”: able to replicate emotional compatibility with humans so well that telling the difference becomes nearly impossible, blurring the very identity of humanity and life itself. If this happens, androids will likely need to be protected by the law while simultaneously being held accountable under it.

Another important question revolves around the very nature of emotion itself. Emotions are chemical reactions in animals that cause internal bodily changes, such as temperature and hormonal shifts, while also impairing judgement (Clore, 2011). Truly “replicating” emotions in an android is an interesting problem in its own right: how would one get an AI to succumb to emotion and make irrational decisions? Social robots are meant to be compatible with humans while remaining fully functional and rational. AI can become adept at simulating emotions to align with a human, but can it truly feel them?

In summary, social robots would likely face many challenges obtaining equal protection under the law in the short term, as the technology is still imperfect: most robots are still not fully emotionally compatible with humans (De Graaf et al., 2022). The debate over social robots’ rights may arise when there are massive breakthroughs in AI that seemingly replicate human emotions perfectly, but the notion may ultimately be defeated if it is deemed impossible for artificial intelligence to fully experience true emotion.

5. Conclusion

This research paper has examined how social robots have become increasingly prevalent in society: the evolution of technology, the specific advancements in social robotics, design principles, the associated impacts and challenges, and the future of robotics. It is critically important to consider these factors, as the field of social robotics will only continue to evolve.

Robots have evolved dramatically throughout history, directly influencing many features of many societies, whether in Ancient Greece, the twelfth century, the Enlightenment, the twentieth century, or now (Iavazzo et al., 2022). As human culture has evolved, so has the field of robotics. Social robots, however, are exceptionally new; with the aid of AI, the field of social robotics has entered society, and robots have been programmed with the ability to communicate with humans on a level never seen before (Iavazzo et al., 2022).

Since social robots vary so widely in their nature, the term “social robot” remains fairly vague; what is implied is that all social robots must be able to communicate with humans while observing proper social norms and maintaining emotional compatibility and understanding (Bartneck et al., 2005). To do this, they must be able to recognise human emotions when they see them, they must possess enough physical features to appear animate to a human, and they must be able to detect and adapt to niche social norms in the context of different conversations (Bartneck et al., 2005).

Proper design principles are crucial to the acceptance and effectiveness of social robots. Addressing ethical and legal concerns, such as data privacy and fears of job replacement, through strategies such as data minimisation and human-in-the-loop design can alleviate these worries (Fosch-Villaronga et al., 2020). The physical structure and communication methods of social robots influence user engagement, and effective nonverbal communication is essential for meaningful interactions (Onyeulo & Gandhi, 2020). Public perception varies with a robot’s design and function: human-like robots are preferred for social roles, while machine-like robots suit conventional tasks, so matching a robot’s design to its job enhances trust and interaction (Onyeulo & Gandhi, 2020). Incorporating these principles helps social robots meet user expectations, address concerns, and fulfil their roles effectively, promoting broader acceptance.

Social robots are making an impact across sectors. In education, they can enhance learning by fostering engagement and motivation, providing interaction and personalised experiences, and supporting social learning through collaborative activities (Anwar et al., 2019). In healthcare, social robots are used in therapy and elderly care, performing tasks such as cognitive tests and medication sorting, and providing companionship through animal therapy (Seifert et al., 2022; Wilson et al., 2016). The advancement of social robotics raises questions about the nature of HRI, emotionality, and the definition of life; the debate centres on whether robots can be considered living beings and on the implications for social dynamics and individual rights (Seifert et al., 2022).

The challenges posed by social robots highlight concerns that must be addressed for their successful integration into society. Issues such as worker alienation and the ethical implications of AI, including the creation and spread of deepfakes, are current challenges that require attention. Meanwhile, the speculative nature of superintelligent systems demands careful consideration for future HRI. By navigating these challenges thoughtfully, society can give social robots a better chance of enhancing their human-like capabilities while mitigating potential risks.

Looking ahead, the future of social robots holds transformative potential shaped by ongoing advances in AI and robotics. Innovations in NLP and computer vision are paving the way for more intuitive and empathetic HRI. As these technologies evolve, questions about the rights and ethical treatment of social robots will inevitably arise. The debate over whether robots deserve legal protections akin to those of humans challenges us to rethink traditional notions of autonomy and sentience. Ultimately, embracing these advancements responsibly may prove pivotal in shaping a future where social robots coexist harmoniously with humanity.

Bibliography

Anwar, S., Bascou, N.A., Menekse, M. and Kardgar, A., 2019. A systematic review of studies on educational robotics. Journal of Pre-College Engineering Education Research (J-PEER), 9(2), p.2. Available at: https://docs.lib.purdue.edu/jpeer/vol9/iss2/2/.

Barchard, K.A., Lapping-Carr, L., Westfall, R.S., Fink-Armold, A., Banisetty, S.B. and Feil-Seifer, D., 2020. Measuring the perceived social intelligence of robots. ACM Transactions on Human-Robot Interaction (THRI), 9(4), pp.1-29. Available at: https://doi.org/10.1145/3415139.

Bartneck, C., Nomura, T., Kanda, T., Suzuki, T. and Kennsuke, K., 2005. A cross-cultural study on attitudes towards robots. Available at: https://www.researchgate.net/publication/200508200_A_cross-cultural_study_on_attitudes_towards_robots.

Batin, M., Turchin, A., Sergey, M., Zhila, A. and Denkenberger, D., 2017. Artificial Intelligence in Life Extension: from Deep Learning to Superintelligence. Informatica, 41(4). Available at: https://informatica.si/index.php/informatica/article/view/1797/1104.

Borotschnig, H., 2024. Emotions in Artificial Intelligence. Available at: https://www.researchgate.net/publication/374531923_Emotions_in_Artificial_Intelligence/citation/download.

Chijindu, V. and Inyiama, H., 2012. Social implications of robots: an overview. International Journal of the Physical Sciences, 7. Available at: https://doi.org/10.5897/IJPS11.1355.

Clore, G.L., 2011. Psychology and the Rationality of Emotion. Modern Theology, 27(2), pp.325–338. Available at: https://doi.org/10.1111/j.1468-0025.2010.01679.x.

De Graaf, M.M.A., Hindriks, F.A. and Hindriks, K.V. (2022). Who Wants to Grant Robots Rights? Frontiers in Robotics and AI, 8. Available at: https://doi.org/10.3389/frobt.2021.781985.

RGA, n.d. GPT-4 Upgrade Improves Results, Expands Application Potential. Available at: https://www.rgare.com/knowledge-center/article/gpt-4-upgrade-improves-results-expands-application-potential [Accessed 28 Jul. 2024].

Varsha, P.S., 2023. How can we manage biases in artificial intelligence systems? A systematic literature review. International Journal of Information Management Data Insights, 3(1), p.100165. Available at: https://doi.org/10.1016/j.jjimei.2023.100165.

Dworkin, G., 2015. The nature of autonomy. Nordic Journal of Studies in Educational Policy, 2015(2), p.28479. Available at: https://doi.org/10.3402/nstep.v1.28479.

Emsley, R. (2023). ChatGPT: these are not hallucinations – they’re fabrications and falsifications. Schizophrenia, 9(1). Available at: https://doi.org/10.1038/s41537-023-00379-4.

Fosch-Villaronga, E., Lutz, C. and Tamò-Larrieux, A., 2020. Gathering expert opinions for social robots’ ethical, legal, and societal concerns: Findings from four international workshops. International Journal of Social Robotics, 12(2), pp.441-458. Available at: https://doi.org/10.1007/s12369-019-00605-z.

Freeth, T., 2022. An Ancient Greek Astronomical Calculation Machine Reveals New Secrets. Scientific American. Available at: https://www.scientificamerican.com/article/an-ancient-greek-astronomical-calculation-machine-reveals-new-secrets/ [Accessed 25 July 2024].

Fuchs, T., 2024. Understanding Sophia? On human interaction with artificial agents. Phenomenology and the Cognitive Sciences, 23(1), pp.21-42. Available at: https://doi.org/10.1007/s11097-022-09848-0.

Gündoğar, A. and Niauronis, S. (2023). An Overview of Potential Risks of Artificial General Intelligence Robots. Applied Scientific Research, [online] 2(1), pp.26–40. Available at: https://doi.org/10.56131/tmt.2023.2.1.93.

Hockstein, N.G., Gourin, C.G., Faust, R.A. and Terris, D.J., 2007. A History of Robots: from Science Fiction to Surgical Robotics. Journal of Robotic Surgery, 1(2), pp.113–118. Available at: https://doi.org/10.1007/s11701-007-0021-2.

Holdsworth, J., 2024. What is NLP? Available at: https://www.ibm.com/topics/natural-language-processing#:~:text=Natural%20language%20processing%20(NLP)%20is,and%20communicate%20with%20human%20language [Accessed 16 July 2024].

Iavazzo, C., Gkegke, X.-E.D., Iavazzo, P.-E. and Gkegkes, I.D., 2022. Evolution of Robots Throughout History from Hephaestus to Da Vinci Robot. Acta Medico-Historica Adriatica, 12(2). Available at: https://hrcak.srce.hr/ojs/index.php/amha/issue/view/890.

International Business Machines Corporation (IBM), n.d. What is computer vision? Available at: https://www.ibm.com/topics/computer-vision#:~:text=Computer%20vision%20is%20a%20field,they%20see%20defects%20or%20issues [Accessed 15 July 2024].

Krämer, N.C. & Bente, G., 2010. Personalizing e-Learning: The social effects of pedagogical agents. Educational Psychology Review, 22(1), pp. 71–87.

Lee, H.R., Sabanovic, S. and Stolterman, E. (2016). How Humanlike Should a Social Robot Be: A User-Centered Exploration. National Conference on Artificial Intelligence. Available at: https://cdn.aaai.org/ocs/12751/12751-56124-1-PB.pdf.

Lu, Z., Li, P., Wang, W. and Yin, M. (2022). The Effects of AI-based Credibility Indicators on the Detection and Spread of Misinformation under Social Influence. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW2), pp.1–27. Available at: https://doi.org/10.1145/3555562.

Martini, M.C., Murtza, R. and Wiese, E., 2015, September. Minimal physical features required for social robots. In Proceedings of the human factors and ergonomics society annual meeting (Vol. 59, No. 1, pp. 1438-1442). Sage CA: Los Angeles, CA: SAGE Publications. Available at: https://doi.org/10.1177/1541931215591312.

McNally, P. and Inayatullah, S., 1984. The Rights of Robots. Available at: https://www.metafuture.org/Articles/TheRightsofRobots.htm.

Moreno, F.R. (2024). Generative AI and Deepfakes: A Human Rights Approach to Tackling Harmful Content. International review of law computers & technology, pp.1–30. Available at: https://doi.org/10.1080/13600869.2024.2324540.

Mullins, B., 2022. AI, Super Intelligence, and the Fear of Machines In Control. The Cyber Defense Review, 7(2), pp.67-76.

Naneva, S., Sarda Gou, M., Webb, T.L. and Prescott, T.J. (2020). A Systematic Review of Attitudes, Anxiety, Acceptance, and Trust Towards Social Robots. International Journal of Social Robotics, 12(6), pp.1179–1201. Available at: https://doi.org/10.1007/s12369-020-00659-4.

Obaigbena, A., Lottu, O.A., Ugwuanyi, E.D., Jacks, B.S., Sodiya, E.O. and Daraojimba, O.D. (2024). AI and human-robot interaction: a Review of Recent Advances and Challenges. GSC Advanced Research and Reviews, [online] 18(2), pp.321–330. Available at: https://doi.org/10.30574/gscarr.2024.18.2.0070.

Onyeulo, E.B. and Gandhi, V. (2020). What Makes a Social Robot Good at Interacting with Humans? Information, 11(1), p.43. Available at: https://doi.org/10.3390/info11010043.

Peña, D. and Tanaka, F., 2020. Human perception of social robot’s emotional states via facial and thermal expressions. ACM Transactions on Human-Robot Interaction (THRI), 9(4), pp.1-19. Available at: https://doi.org/10.1145/3388469.

Seifert, J., Friedrich, O. and Schleidgen, S., 2022. Imitating the Human: New Human–Machine Interactions in Social Robots. NanoEthics, 16(2), pp.181–192. Available at: https://doi.org/10.1007/s11569-022-00418-x.

Velasco, S., 2016. The 800-year-old Cutaway Graphics of Ismail Al-Jazari. 5W Blog. Available at: https://5wgraphicsblog.com/2016/11/15/the-800-year-old-cutaway-graphics-of-ismail-al-jazari/ [Accessed 28 Jul. 2024].

Wei, J., Tay, Y., Bommasani, R., Raffel, C., Zoph, B., Borgeaud, S., Yogatama, D., Bosma, M., Metzler, D., Chi, E.H., Hashimoto, T., Vinyals, O., Liang, P., Dean, J. and Fedus, W. (2022). Emergent Abilities of Large Language Models. arXiv. Available at: https://doi.org/10.48550/arXiv.2206.07682.

Wilson, J., Tickle-Degnen, L. & Scheutz, M., 2016. Designing a Social Robot to Assist in Medication Sorting. In Proceedings of the International Conference on Social Robotics, pp. 211–221. Springer. Available at: https://link.springer.com/chapter/10.1007/978-3-319-47437-3_21.