Editor's Note: On March 23 and 24, Regent College will host the 2026 Laing Lectures. We are thrilled to welcome Dr. Meghan Sullivan, the Wilsey Family College Professor of Philosophy at the University of Notre Dame, to speak on "Faith-Based Ethics for a World of Powerful AI." (You may have already heard Dr. Sullivan on the Regent College Podcast.)

In anticipation of next week’s events, the Vine team invited Regent College students, staff, and faculty to reply to three questions about their engagement with powerful AI. Below, you’ll find one staff member's extended reflection on these questions. Tomorrow, we’ll share a range of responses from throughout the Regent community.


What do you think about AI? It’s a question everyone seems to be asking—in public and in private, in shouts and whispers, all day, every day. The following is my attempt to provide a partial answer by responding to three survey questions. In doing so, I’m painfully aware of the gaps in my thinking. I worry about seeming naive, or ill-informed, or flippant.1 So, please, do me a favour: read the following as a case study, not an exemplar. Above all, I’ve tried to focus on process: not just what I think, but how I try to think about AI. As you’ll see, that process involves asking a lot more questions. I hope this piece will prompt you to ask some of your own.

Survey Question #1: Do you use generative AI in your daily life? If so, how and why? If not, why not?

For a long time after generative AI burst into the mainstream, I was determined to keep my usage somewhere in the range of non-existent to bare minimum. Frankly, I found the whole subject of AI daunting, bordering on terrifying. I avoided it, I downplayed it to myself, I generally looked askance—but I couldn’t quite look away. 

My perspective started to shift as I began seeking out, rather than avoiding, credible reporting and analysis on AI-related issues. This is often more unnerving than inspiring, to be sure, but becoming more familiar with current news and discussions made me more comfortable thinking through issues rather than instinctively recoiling from them. I began experimenting ever so slightly, turning to chatbots with questions I couldn’t think of any other way to answer. My thinking really started to change after talking about AI with a few people I know as rigorous and faithful theological thinkers. I found that they were using it more than I expected, but in ways I could respect.

I have many and growing concerns about powerful AI, its role in society, and its environmental impact. But I’ve come to the conclusion that, for me personally, trying to avoid AI at all costs would be less responsible than gaining a level of familiarity with these technologies that could help me make informed choices and judgments in an increasingly AI-oriented future. 

My use of AI is still quite sparing, and my thinking in this area has a long way to go. At this point, I've come to be more open to generative AI as a potentially useful tool, but one that requires an incredible amount of thoughtfulness and discernment. As I try to be discerning about possible uses for AI in my own life, I’ve gravitated to a few questions. First, is AI the best tool for the job? That is, is there something about AI that makes it more qualified than a person to answer a question, or more relevant than other resources to a particular task?2 

First, is AI the best tool for the job?

In addressing this question, I think a lot about large language models (LLMs), the technology underlying AI chatbots. Writing for IBM, Cole Stryker defined LLMs as “a category of deep learning models trained on immense amounts of data, making them capable of understanding and generating natural language and other types of content.” Crucially, they “work as giant statistical prediction machines that repeatedly predict the next word in a sequence. They learn patterns in their text and generate language that follows those patterns.”3
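To make “repeatedly predict the next word in a sequence” a little more concrete, here is a deliberately cartoonish sketch in Python. It is not how an LLM actually works—real models use neural networks trained on vast datasets, not simple word counts—but it shows the basic idea of generating text by always choosing the statistically most likely next word, using a tiny made-up corpus:

```python
from collections import Counter, defaultdict

# Toy illustration only: count which word most often follows each word
# in a tiny corpus, then "generate" text by repeatedly picking the
# most frequent successor. Real LLMs learn far richer patterns.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat ate the fish ."
).split()

# For each word, tally the words that follow it.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None."""
    if word not in next_counts:
        return None
    return next_counts[word].most_common(1)[0][0]

def generate(start, length=5):
    """Generate a short sequence by always taking the top prediction."""
    words = [start]
    for _ in range(length):
        nxt = predict_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(predict_next("sat"))  # prints "on": "sat" is always followed by "on"
print(generate("the"))
```

Even this toy version hints at the point made below: the output is whatever is most statistically plausible given the training text, nothing more.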

As far as I understand it, this suggests that an AI chatbot’s answer to a question is something approximating the “average” “opinion” of “the internet” on the topic it’s given. Now, those quotation marks are doing a lot of work. (Specifically, they’re trying to get me off the hook for explaining what any of those words actually mean in this context.) But, basically, I see LLMs as offering something like the results of a large-scale survey, as if they conducted a poll and a series of focus groups of everyone who’s written about a subject and then presented a set of summary conclusions. 

I’m certain this chatbot-as-pollster analogy is flawed in many ways, but it gives me a framework for thinking about the nature of LLMs, their unique capabilities as “statistical prediction machines,” and their strengths and weaknesses. For example, a strength is that they can give you a big-picture view of a subject very quickly by finding, analyzing, and recombining types and quantities of data far beyond the scope of a person working alone. A weakness is that summarizing and re-packaging a huge number of people’s ideas strips out relevant information about context, expertise, motivation, and much more. 

Let me try to illustrate my point by turning to my favourite use for generative AI: helping me cook with whatever food I have on hand.4 It turns out that AI is very good at turning random lists of ingredients into plausible meal options. Why? I think it’s because this use case plays precisely to the strengths of LLMs. Given a prompt to generate a recipe using specific ingredients, it responds with something like the statistical average of online food writers’ advice on how to use, say, chickpeas and sun-dried tomatoes. This works especially well for me because when I go through this process I’m not looking for something special or personalized, I’m looking for something easy and hard to mess up. The most statistically plausible option suits me just fine.5

So, that addresses the question of whether an AI tool is uniquely suited to what I want to do. I think it is. But is this thing that AI is so good at actually a good thing to do? There is far more to say about ethics and AI, but I’ll stick to my example. I started using AI to generate recipes with a goal of reducing food waste, and by that measure it has succeeded beyond my expectations. Not only has it cut down on the amount of food I buy but don’t use, it has also given me the confidence to start using a service that sells fresh produce that has been rejected by stores and would otherwise go to waste. The selections are unpredictable, but now I know I can wing it. In all these respects, I feel like this particular use of AI is helping me live out my values around environmental ethics in a new way.

I’ve presented an optimistic case study here. Not coincidentally, it’s also very low-stakes. If I don’t like a recipe a chatbot churns out, the consequences are close to non-existent. Of course, that’s not always the case. 

So, we return to discernment. I feel a heavy responsibility to evaluate anything AI tells me, always bearing in mind that (a) there's a lot of nonsense (and worse) on the internet just waiting to be scooped up by LLMs, and (b) chatbot responses are powerfully shaped by training data, instruction tuning, and other processes that reflect the biases and interests of those who stand to profit from these technologies. The more consequential the use case, the more urgent the need for discernment becomes.

Survey Question #2: What hopes and/or fears do you have about the future of AI?

My concerns in this area are many and diverse. They include:

  • The energy requirements and environmental impact of AI data centres.
  • The impact on workers as employers are incentivized to automate more and more roles.
  • Growing opportunities for deception, social and political manipulation, and, more generally, the erosion of a shared public understanding of reality.
  • The loss of the desire and ability to communicate and form connections with other humans, as people turn instead to sycophantic chatbots.
  • Increasing concentrations of wealth and power in the hands of a few unaccountable people and corporations.
  • The use of AI for violent and destructive ends (including by militaries).

I could continue, but that’s probably enough to be getting on with.

My greatest hope is that somehow, contrary to most historical precedent, AI will be turned first and foremost to addressing the problems of people experiencing extreme poverty and other dire conditions. For example, could AI assist in the development of new crop varieties or agricultural strategies for landscapes undergoing desertification due to climate change? Could it help develop tools that improve access to drinking water, expand microfinance opportunities, or reduce maternal and infant mortality in under-resourced regions and communities? It would be both just and fitting if AI systems developed by the rich and powerful were used to disproportionately benefit those who have been bypassed or victimized by the global economy.

Survey Question #3: What kind of questions should Christians be asking about AI and its role in society?

All the questions! I’m tempted to leave it at that, because many people are asking many excellent questions already. But here are a few of the more idiosyncratic questions I’ve been thinking about.

  • How can we encourage and facilitate deep human connection, including for people whose physical, geographical, psychological, or other circumstances make in-person interaction difficult? To what extent are we willing to be flexible and creative as we offer alternatives to AI companionship, especially to people who are easily sidelined by traditional social activities?
  • What, if any, are our ethical obligations to AI agents—both in the present, and in future scenarios where questions about artificial consciousness are increasingly pressing? Do we need to be good neighbours to AI entities? How might these questions relate to our thinking about the ethical treatment of people, animals, and creation as a whole?


The rise of generative and agentic AI is one of the most pressing issues we face as humans, yet also one of the hardest to grasp. Even the experts pushing this technology forward admit that they only partially understand what they’re unleashing. How can society grapple with something that’s constantly ten steps ahead of us, racing to an unknown destination? I don’t know the answer to that. But I’m pretty sure silence and isolation won’t help. As humans, and especially as Christians, we need to keep asking, “What do you think about AI?”

So . . . what do you think about AI?