Call For Papers | Volume 2 Issue 3
To What Extent Do You Believe That Artificial Intelligence Superficially and Inherently Shapes Your Decisions?
DOI: https://doi.org/10.5281/zenodo.19447845
Author: Mrs. Urvi Shah, Director, FAN ADVISORY LLP
E-Mail: urvi.guru@gmail.com
Abstract
This qualitative research paper explores in depth the extent to which Artificial Intelligence (AI) both superficially and inherently influences human decision-making processes. The study is grounded in the increasing presence of AI-driven systems, such as recommendation algorithms, search engines, and digital assistants, that now mediate everyday choices, often without users being fully aware of their influence. The central research question guiding this paper is: to what extent does Artificial Intelligence not only assist but also subtly and structurally shape human decisions?
To investigate this question, we conducted a series of in-depth, semi-structured interviews with participants from diverse age groups, educational backgrounds, and levels of technological familiarity. These interviews were designed to capture both explicit opinions about AI and implicit behavioral patterns, including habits, routines, and reliance on AI systems in everyday situations. Participants were asked to reflect on real-life scenarios such as choosing what content to watch, selecting products to purchase, deciding routes for travel, and forming opinions based on information curated by AI systems.
The findings reveal a complex and layered relationship between humans and AI. On a superficial level, participants consistently described AI as a helpful and convenient tool that provides suggestions, simplifies tasks, and improves efficiency. Most participants expressed a strong belief that they maintain full control over their final decisions. However, a deeper analysis of the interview data suggests a more embedded and inherent influence. AI systems were found to shape the range of available choices, prioritize certain options, and gradually influence user preferences over time through repeated exposure and personalization.
Furthermore, the study highlights a growing dependence on AI systems: participants often rely on recommendations without critically evaluating alternatives. Many participants also demonstrated limited awareness of how AI systems filter and prioritize information, which preserves a perceived sense of autonomy even while underlying decision pathways are being influenced.
In conclusion, this study argues that while AI does not directly control human decisions, it plays a significant role in shaping them, both superficially and inherently. The influence of AI operates not only at the level of visible suggestions but also at a deeper structural level, where it frames possibilities, guides attention, and gradually conditions behavior. This raises important questions about autonomy, awareness, and the evolving relationship between humans and intelligent systems.
Introduction
Artificial Intelligence (AI) has rapidly become embedded within everyday life, influencing how individuals search for information, communicate with others, consume media, and make decisions. From personalized recommendations on streaming platforms to algorithmically curated social media feeds, AI systems continuously interact with users, often in ways that feel seamless, natural, and unobtrusive. As a result, decision making is no longer an entirely independent cognitive process but one that is increasingly shaped within digitally constructed environments.
This growing integration of AI raises important questions about human autonomy and control. While AI is commonly understood as a tool designed to assist users by providing relevant information and suggestions, its role may extend beyond simple support. The ways in which AI systems filter, prioritize, and present information can significantly influence what users see, consider, and ultimately choose. This creates a situation in which decisions may appear self-directed while being subtly guided by underlying algorithmic processes.
The purpose of this study is to explore the extent to which AI influences decision making at both superficial and inherent levels. A superficial influence refers to visible interactions, such as recommendations, suggestions, and prompts, that users can easily recognize. In contrast, inherent influence refers to deeper, less visible effects, including the shaping of preferences, the limiting of perceived choices, and the gradual conditioning of behavior over time. By distinguishing between these two levels, this research aims to better understand how AI operates within everyday decision-making contexts.
To address this research question, we conducted qualitative interviews with participants from a range of backgrounds, focusing on their lived experiences with AI systems. Rather than measuring behavior quantitatively, this study emphasizes personal perceptions, reflections, and narratives, allowing for a more nuanced understanding of how individuals interpret and respond to AI influence in their daily lives.
This paper argues that although users often perceive themselves as fully autonomous decision makers, AI plays a significant role in structuring the environment in which decisions are made. By examining both conscious and unconscious forms of influence, this research contributes to ongoing discussions about technology, agency, and the future of human decision making in an increasingly AI-mediated world.
Methodology
This study adopts a qualitative research design in order to explore in detail how individuals experience and interpret the influence of AI in their everyday decision making. A qualitative approach was chosen because the research focuses on understanding subjective perspectives, personal experiences, and underlying meanings rather than measuring numerical outcomes. This allows for a deeper exploration of how AI influence is perceived, both consciously and unconsciously, by users.
The primary method of data collection was semi-structured interviews. A total of fifteen participants were selected using purposive sampling to ensure diversity in age, education level, occupation, and familiarity with digital technologies. Participants ranged from students to working professionals and included individuals with both high and low levels of engagement with AI-driven platforms. This diversity was important in capturing a wide range of experiences and viewpoints.
Each interview was conducted individually and lasted between thirty and forty-five minutes. The interviews were carried out in a conversational format, allowing participants to express their thoughts freely while still addressing key guiding questions. These questions focused on everyday interactions with AI systems such as social media platforms, online shopping websites, navigation tools, and content recommendation services. Participants were encouraged to provide specific examples from their own lives to illustrate how they make decisions in AI-mediated environments.
All interviews were recorded with participant consent and later transcribed for analysis. To address ethical considerations, participants were informed about the purpose of the study and assured that their responses would remain confidential and anonymous. No personally identifiable information was included in the final data set.
The data were analyzed using thematic analysis. This involved reading the interview transcripts carefully multiple times to identify recurring patterns, themes, and insights. Initial codes were generated from significant statements, and these codes were then grouped into broader themes such as perceived control, reliance on AI, awareness of influence, and behavioral change over time. This process allowed for the identification of both explicit opinions and implicit patterns in participant responses.
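As a deliberately simplified illustration of the coding step described above, the following Python sketch shows how initial codes might be grouped into broader themes and tallied across transcripts. All participant identifiers, codes, and code-to-theme mappings here are hypothetical and only mirror the theme names used in this study; real thematic analysis is interpretive and is not reducible to this mechanical tallying step.

```python
from collections import Counter

# Hypothetical coded excerpts: (participant_id, code) pairs produced
# during open coding of interview transcripts. These are invented
# examples, not the study's actual codebook or data.
coded_excerpts = [
    ("P01", "follows first suggestion"),
    ("P01", "claims full control"),
    ("P02", "relies on navigation app"),
    ("P03", "unaware of personalization"),
    ("P03", "follows first suggestion"),
]

# Initial codes are grouped into the broader themes named in the text.
code_to_theme = {
    "claims full control": "perceived control",
    "follows first suggestion": "reliance on AI",
    "relies on navigation app": "reliance on AI",
    "unaware of personalization": "awareness of influence",
}

# Tally how often each theme occurs across all coded excerpts.
theme_counts = Counter(code_to_theme[code] for _, code in coded_excerpts)
print(theme_counts.most_common())
```

In an actual analysis this grouping is revised iteratively as transcripts are re-read; the sketch captures only the final bookkeeping of codes into themes.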
Overall, this methodology was designed to provide a rich and detailed understanding of how AI influences decision making from the perspective of everyday users. By focusing on lived experiences and in-depth narratives, the study aims to uncover not only what participants think about AI but also how it actually shapes their decisions in practice.
Findings
The analysis of the interview data revealed several detailed and interconnected themes that highlight how AI influences decision making at both superficial and inherent levels. While participants initially described their interactions with AI as minimal and supportive, deeper examination of their responses showed consistent patterns of reliance, shaping, and subtle behavioral influence.
Perceived Autonomy and Sense of Control
A dominant theme across almost all interviews was a strong belief in personal autonomy. Participants repeatedly stated that they make their own decisions and that AI only assists in the process. Many used phrases such as "I choose what I want" or "I do not blindly follow suggestions." However, when asked to describe specific decision-making situations, participants often revealed that they rarely go beyond the options presented to them. This indicates a gap between perceived independence and actual behavior.
Superficial Influence Through Suggestions and Convenience
Participants widely acknowledged that AI affects them at a surface level. This includes recommendations on streaming platforms, suggested products in online shopping, and auto-generated search results. These features were described as helpful, time-saving, and convenient. Many participants admitted that they often select from the first few options presented because it feels efficient and requires less effort. In this sense, AI acts as a filter that reduces decision complexity while simultaneously guiding choices.
Inherent Influence Through Framing and Limiting Choices
Beyond visible suggestions, a deeper form of influence was identified. AI systems were found to shape the decision-making environment itself by determining which options are shown and which are hidden. Participants rarely questioned why certain content appeared or why alternatives were not visible. Over time, repeated exposure to similar types of content led to the reinforcement of preferences and habits. This suggests that AI does not just assist decisions but actively frames the boundaries within which decisions are made.
Development of Dependence and Habitual Use
Another key finding was the gradual development of dependence on AI systems. Participants described relying on navigation apps without considering alternative routes, trusting product recommendations without comparing multiple sources, and consuming content suggested by algorithms without actively searching for new material. This habitual reliance reduces active decision-making effort and increases passive acceptance of AI guidance.
Limited Awareness of Algorithmic Processes
A significant number of participants demonstrated limited understanding of how AI systems function. While they were aware that recommendations are personalized, few could explain how or why this personalization occurs. This lack of awareness contributes to an uncritical acceptance of AI outputs and strengthens the illusion that choices are entirely self-directed.
Emotional and Cognitive Comfort
Participants also expressed a sense of comfort when using AI-driven systems. The reduced need to think deeply or evaluate multiple options created a feeling of ease and satisfaction. However, this comfort may come at the cost of reduced critical thinking and exploration. Some participants acknowledged that they rarely step outside their usual patterns once AI systems learn their preferences.
Contradictions in Participant Responses
An important observation was the presence of contradictions within individual interviews. Participants would initially deny being influenced by AI but later describe behaviors that clearly indicate dependence and guidance. This inconsistency highlights the subtle nature of AI influence, which often operates below the level of conscious awareness.
Overall, the findings suggest that AI influence is both visible and hidden. While users recognize and accept its superficial role, they underestimate its deeper impact on shaping preferences, limiting choices, and guiding behavior over time.
Discussion
The findings of this study provide important insight into the complex relationship between human decision making and AI. While participants consistently expressed confidence in their ability to make independent choices, the data reveal that this sense of autonomy is often constructed within environments already shaped by AI systems. This creates a situation in which individuals feel in control even when their decisions are being guided in subtle and structured ways.
One of the key points emerging from the findings is the distinction between perceived influence and actual influence. Participants tend to recognize only the most visible forms of AI interaction, such as recommendations and suggestions. These are interpreted as optional tools that can be accepted or ignored. However, the inherent influence of AI operates at a deeper level by shaping what options are available in the first place. This aligns with the idea that control over choices can be exercised not only by directing decisions but also by structuring the range of possible alternatives.
The concept of framing becomes particularly important in understanding this process. AI systems do not simply present neutral sets of options. Instead, they prioritize certain types of information based on past behavior, predicted preferences, and engagement patterns. As a result, users are more likely to encounter content that reinforces their existing interests, while alternative perspectives may remain unseen. This gradual narrowing of exposure can influence long-term preferences, beliefs, and habits without requiring explicit persuasion.
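The self-reinforcing dynamic described above can be sketched with a deliberately minimal toy model: a system that always surfaces its highest-scoring option and raises that option's score each time the user accepts the suggestion. The topic names and numbers are invented for illustration, and real recommender systems are far more sophisticated, but the narrowing effect is the same in kind.

```python
# Toy model of the reinforcement loop: one interest score per topic,
# the top-scoring topic is always shown, and acceptance bumps its score.
# A slight initial lean toward one topic quickly becomes dominance.
scores = {"news": 1.0, "sports": 1.0, "cooking": 1.1}

shown = []
for _ in range(5):
    top = max(scores, key=scores.get)  # the option the system surfaces
    shown.append(top)
    scores[top] += 0.5                 # engagement reinforces the lead

print(shown)  # the same topic is surfaced every round
```

Because the leading option is the only one whose score ever grows, the gap widens with each round; no explicit persuasion is needed for exposure to narrow, which is precisely the framing effect participants did not perceive.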
Another significant aspect highlighted by the study is the development of reliance on AI systems. Convenience plays a central role in this process. Participants repeatedly emphasized how AI reduces effort, saves time, and simplifies complex decisions. While these benefits are undeniable, they also contribute to a reduction in active critical engagement. When decisions become easier, individuals may become less motivated to question recommendations, explore alternatives, or reflect on their choices. Over time, this can lead to a passive decision-making style in which AI guidance is accepted with minimal scrutiny.
The limited awareness of how AI systems function further strengthens this dynamic. Without a clear understanding of algorithmic processes, users are less likely to recognize potential biases or limitations in the information they receive. This lack of transparency allows AI influence to remain largely invisible even as it shapes everyday behavior. The contradictions observed in participant responses demonstrate this clearly: individuals believe they are not influenced, yet describe patterns of behavior that suggest otherwise.
Importantly, this discussion does not suggest that AI completely determines human decisions. Rather, it highlights that influence operates on a spectrum. At one end, AI serves as a tool that supports user choice; at the other, it subtly structures the decision-making environment in ways that make certain outcomes more likely. The interaction between human agency and algorithmic guidance is therefore not a simple opposition but a continuous negotiation.
Overall, the findings suggest that AI influence should be understood not only in terms of direct control but also in terms of environmental shaping. By organizing information, prioritizing options, and learning from user behavior, AI systems become active participants in the decision-making process. Recognizing this role is essential for developing greater awareness and maintaining meaningful autonomy in an increasingly AI-mediated world.
Conclusion
This study set out to explore the extent to which AI influences human decision making at both superficial and inherent levels. Through qualitative interviews and thematic analysis, the research reveals that AI plays a far more significant role in shaping decisions than users typically recognize. While participants strongly believed in their own independence and control, the findings demonstrate that this autonomy often exists within boundaries structured by AI systems.
At a superficial level, AI is clearly visible as a tool that offers suggestions, recommendations, and convenient options. Users engage with these features consciously and often appreciate the efficiency and ease they provide. However, the deeper, inherent influence of AI is less visible yet more impactful. By filtering information, prioritizing certain choices, and learning from user behavior, AI systems shape the environment in which decisions are made. This means that even when individuals feel they are making independent choices, those choices are often guided by pre-selected options and patterns established by algorithms.
The study also highlights the role of dependence and reduced critical engagement. As users become accustomed to relying on AI systems, decision making can become more passive, with less effort placed on exploring alternatives or questioning recommendations. Combined with limited awareness of how AI systems operate, this creates a situation in which influence is both normalized and unnoticed.
Overall, the research suggests that AI does not directly control human decisions but significantly conditions them. Its influence is not forceful or explicit but gradual, consistent, and embedded within everyday interactions. The extent of AI influence can therefore be understood as both superficial, in its visible assistance, and inherent, in its deeper role in shaping preferences, behaviors, and perceived choices.
This raises important considerations for the future. As AI continues to evolve and integrate further into daily life, it becomes increasingly important for users to develop awareness of how these systems function and how they influence decision-making processes. Encouraging critical thinking, transparency, and digital literacy will be essential to ensuring that human autonomy remains meaningful in an AI-mediated world.