Editor's Note: On March 23 and 24, Regent College will host the 2026 Laing Lectures. We are thrilled to welcome Dr. Meghan Sullivan, the Wilsey Family College Professor of Philosophy at the University of Notre Dame, to speak on "Faith-Based Ethics for a World of Powerful AI." (You may have already heard Dr. Sullivan on the Regent College Podcast.)
As we looked forward to next week’s events, the Vine team invited Regent College students, staff, and faculty to reply to three questions about their attitudes to, and engagement with, powerful AI. In Thinking About AI, Part 1, one staff member shared an extended response to these questions. In this post, we examine responses from 22 other community members. We are grateful to everyone who took the time to share their perspectives; all of the responses helped shape the contents of this article.
“What do you think about AI?” It’s not an easy question to define, much less to answer.
The challenges start at the most basic level: what exactly do we mean by “artificial intelligence,” “powerful AI,” or more specific terms like “generative AI”1 or “agentic AI”?2 Even if we could settle on clear technical definitions, they might not last for long as the technology continues to race ahead. Furthermore, questions about AI are fraught with personal, emotional, and even spiritual significance. They’re the kind of questions that lead to intense feelings and strong, if often conflicted, opinions.
The following discussion draws on responses from 22 members of the Regent community (10 students, 7 staff members, and 5 faculty) who kindly shared their thoughts on a series of questions about AI between February 17 and March 13, 2026. The survey was informal, and the results are by no means scientific. They are, however, illuminating. On the one hand, the responses showcase a wide range of assumptions and perspectives about what AI is, what it means, and what using it says (or would say) about its users. On the other hand, they reveal a broadly shared commitment to intellectually and morally serious engagement with the issues AI presents. The respondents may not agree about how to think or feel about AI, but none of them are taking it lightly.
Survey Question #1: Do you use generative AI in your daily life? If so, how and why? If not, why not?
To say we got a variety of responses to this question would be an understatement. A few respondents simply said they use generative AI for all kinds of things. A few said they don’t use it at all. Most, however, offered something in between.
- Several respondents described using AI to quickly obtain either specific factual information (such as birth and death dates for a historical figure or compositions by a certain author) or general background information on a subject.
- A few staff and faculty members mentioned using AI in specific job-related ways, ranging from locating research sources, to carrying out IT system maintenance, to getting guidance on mechanical issues related to lighting or temperature controls.
- Among a small handful of respondents who mentioned using AI for specific personal or domestic tasks (including cooking and workout planning), two reported using AI to get health-related information and advice, noting that AI offered an approach or experience they couldn’t get from human practitioners.3
Several respondents described rejecting all or most uses of generative AI. Their reasons for doing so ranged from ecological and philosophical concerns to doubts about AI’s accuracy or usefulness. These respondents tended to explain why they don’t use AI in terms of what it would mean if they did use it. To them, using AI in their everyday life might look like outsourcing their thinking to a machine, avoiding challenge and potential failure, choosing the easy way over the best way, rejecting human creativity, depersonalizing their communication, or committing betrayal by presenting others’ words as their own.
The sheer ubiquity of AI added complexity to some respondents’ accounts of their usage. Two respondents pointed out that the use of AI tools by businesses and other online and offline entities makes the use of generative AI effectively involuntary.4 Several others noted that AI summaries at the top of Google searches put AI-generated content in front of them whether they ask for it or not. (Google's support documentation notes that AI Overviews “cannot be turned off.”5)
The responses to our first question made it clear that many survey respondents had strong, though often mixed, feelings about their personal use or non-use of AI. Our second question suggested some sources of this intensity.
Survey Question #2: What hopes and/or fears do you have about the future of AI?
This question, again, provoked a wide range of heartfelt responses. Perhaps the most noticeable trend among them was that participants described significantly more fears than hopes. Indeed, while all but one of the respondents mentioned fears, only about half mentioned hopes.6
Given that we surveyed members of an academic community, it is perhaps not surprising that fears about intellectual atrophy—including diminished skills in critical thinking, reasoning, and judgment—loomed large in many responses. (More than one respondent specifically worried that AI will “make us dumber.”) A few responses hinted at concerns that AI-generated misinformation will make critical thinking and discernment both more urgent and more difficult. Several respondents also referred to AI's detrimental effects on our ability or willingness to do the hard work of learning.
Assistant Professor of the History of Christianity Prabo Mihindukulasuriya’s reflection on how AI might affect the discipline of history illustrates several aspects of respondents’ fears of intellectual decline.
I hope that AI will be able to search unpublished handwritten documents in online archives directly from digitized images even without prior transcription. And this in multiple languages with automatic translation tools. But only as a preliminary search tool for scholars to know what is out there, not as a substitute for hands-on archival research. There's nothing to beat the excitement and romance of sifting through physical documents centuries old!
I fear more confusion about historical data due to the misrepresentation of sources and even the alteration or fabrication of sources.
I fear scholars and students will lose the critical-constructive intellectual imagination and skills that make historical scholarship truly exciting.
Another theme that emerged in the survey results was concern about how AI is impacting human creative expression in everything from fine arts to personal communication. Several respondents noted the growing prevalence of AI-generated content with dismay. Looking forward along current trend lines, Alumni and Church Relations Officer Daniel Foster Fabiano wondered, “Will we be able to distinguish between AI-created and real images and stories in the future?”
Multiple respondents also cited fears around AI’s impact on human relationships. As MATS student Isaac Downie put it, “Sociality is an important part of being human; if we rely on AI as a conversation partner, a sounding board, or a research partner, what does it mean for the people who would normally fit those roles? How does it change us in replacing those human roles with AI?” A few respondents pointed with particular concern to the growing practice of treating AI chatbots as friends or romantic partners, or to troubling reports about how people experiencing mental health challenges have been harmed by interactions with chatbots.
A handful of respondents shared fears that relate to all of these themes but go even deeper. For some, the use or misuse of AI threatens our understanding and experience of human nature itself. Professional-in-Residence Emily Lange, a human geography and international relations expert working in international development, put it succinctly: “The fear is that AI replaces human activity unnecessarily, that it contributes to unlearning processes of critical skills—ultimately, that it undermines what it means to be human.”
Respondents differed in where they located their fears about AI. Some described AI as inherently threatening to humane values, while others focused more on the agency of human users. MATS student Mimi Yap articulated the latter view: “AI, like money, is neutral, until it is used in something nefarious. It is being used in creative ways but also in abusive ways; it is utilised for the good and also for evil.”
J.I. Packer Professor of Theology Jens Zimmermann drew a careful distinction between fears of what AI might do to humans and fears of what humans might use AI to do to themselves.
I am with those (Erik Larson,7 Nils Nilsson,8 Meredith Broussard,9 Hubert Dreyfus,10 and Raymond Tallis, to name only a few scholars/industry experts11) who deny that AI currently is or ever will be intelligent, that is, possess any kind of intellect. So I have no apocalyptic fears of supercomputers who take over the planet as a new super-intelligent species. However, the unthinking employment of so-called "AI" can still be very dangerous. In education, it can lead to the atrophying of the human spirit, and prevent the formation of critical, thinking minds. The current push to use AI in juridical and management settings to replace human judgment is also bad enough. The most immediate existential threat, however, comes from the increasing use of AI in military settings, not least in strategic warfare like the current use of AI for target selection in bombing raids. Taking the human factor out of the decision making process in these scenarios is unethical and reprehensible. In short, "AI" can make us dumber, create a more inhumane society, and also dramatically increase our chances of destroying ourselves.
In addition to fears about how AI could affect individuals, some respondents mentioned fears about its impact on society more broadly. Concerns about AI’s environmental impact, effects on employment, and use in disseminating misinformation were cited by multiple respondents, as were concerns about biases embedded in large language models.
What, then, do our respondents hope for from AI? Among specific responses, advances in medicine and healthcare topped the list. Tools to aid academic research and reduce workplace tedium also received multiple mentions.
R. Paul Stevens Associate Professor of Marketplace Theology and Leadership David Robinson offered a relatively optimistic perspective, speculating that the needs of the AI industry might incentivize and accelerate solutions to other problems.
I am generally positive about human innovation and acknowledge the potential of AI in many areas, such as scientific and medical research. AI can lead to significant productivity gains for workers and should spare us some tedious tasks going forward. One of my concerns about AI has to do with the environmental impact of large data centres, which have energy needs that rival major cities and still rely on fossil fuels. I hope that newer AI models can rise to the occasion by speeding up the development of more sustainable energy sources.
With reference to historical precedents, Worship Coordinator Thomas Bergen described hopes for what AI might help us achieve, but fear of what it will lead us to expect.
Hopes for saner, more efficient and coordinated planning of cities, healthcare, etc. Fears of ever-greater demands and expectations of productivity placed on humans. When in human history have we ever been able to do more and consume more through technological advances (and, consequently, come to expect more of ourselves and others) but then collectively say “no, we must scale back”?
As these examples begin to suggest, many respondents who mentioned hopes for the future of AI framed their hopes and fears as two sides of the same coin. Several positioned possible outcomes on a spectrum, arguing that the same qualities of AI that could be helpful if used with judiciousness and restraint could become fearsome if employed irresponsibly or allowed to gain too much control over human life and society. GradDipCS student Silja Lehtinen, for example, has hope for AI as “a useful tool” in “human-led contexts,” but worries about carelessness in its implementation.
I think AI can contribute positively to things like healthcare. For example, if AI can be used to detect diseases, it can speed up diagnosis and care. I would not remove the human element though, or sideline the importance of a human doctor . . . I am apprehensive, though, about AI being adopted carelessly and without considering how it can impact employment and human dignity. It should be a tool for the greater good, not a “master” that ends up controlling lives and dehumanizing work. My worry is that it will cause greater inequalities and lead to a world where the outcome is more important than the lived experience.
IT Director Cam Tucker drew on iconic media representations to make a similar point. “I hope that humanity's next era is closer to Star Trek than to just about every other science fiction future. AI as an enhancement to human life and not its master.”
Survey Question #3: What kind of questions should Christians be asking about AI and its role in society?
Participants offered a wide range of responses to this question, further illustrating the diversity and complexity of views represented in the Regent community. It is worth noting that while this was the only question that directly referenced religion, participants’ spiritual and theological interests were evidenced throughout the survey. Many respondents made repeated references to their desire to think and act Christianly regarding AI.
While all of the questions suggested by our respondents deserve to be taken up by churches and individual Christians, only a few can be listed here. The following selections from a handful of respondents, most of whom have been identified above,12 illustrate some of the themes that emerged from the full set of responses.
Questions About Meaning
- What is AI? What is its true nature, and what can it do and not do? —Jens
- What is truth? Is there an essential reality beyond culturally generated information and subjective opinion? How do we know what sources of information to trust? —Prabo
- What are the greater purposes of human work beyond merely instrumental accounts? —David
Questions About Being Human
- What makes us distinctly human and different from machines? —Online Course Administrator and Podcast Producer Rachel Hanna
- How does the resurrection of Jesus Christ eclipse the AI narrative and better inform what a fully alive human being is? —Cam
Questions About AI’s Impact
- Who in any society will benefit from AI as an allocator of public goods and provider of services? Who will lose? —Prabo
- How does AI impact our human embodiment and relationships? —Daniel
- How is the use of AI forming us mentally, and how much are we trusting it? —Rachel
- At what point does AI cross over from being a useful tool to controlling our lives and suppressing human experience/dignity? —Silja
Questions About Harm Mitigation
- Given that “creative destruction” is a reality in our economic system, how can we facilitate this process in the most humane manner for workers? —David
- In an ever more efficient world, how do we ensure that people are included and valued for who they are—especially people who may have certain limitations or differences? —Silja
Questions About Christians’ Role in AI Development
- What is our role in challenging and influencing AI and/or politics (in a context where AI is here, whether we like it or not)? —Silja
- How can we contribute to the “formation” of AI by drawing on the legacy of Christian virtues? —David
It seems fitting to end this review with a list of questions. The responses to our informal survey about AI engagement make it clear that members of the Regent community, like thoughtful people in communities around the world, are struggling to make sense of and respond to the rise of powerful AI. For many of us, our conversations on this topic are just beginning—and questions are a great place to start. We hope you’ll join us as we continue thinking, talking, and asking questions about AI.