When the Church Council changed the church's Sabbatical Policy to an annual sabbatical, they asked that we use it to take one week of rest, followed by a week of learning, and then a week of planning. I love learning, but I have to admit that the idea of planning how to spend an entire week was a little daunting. I simply did not want to waste the opportunity. Over several months I carefully thought about how I might spend the time. Several ideas came to mind: I considered obtaining my scuba certification or pursuing my FAA Certified Flight Instructor certificate. In both cases, however, the costs were high and the timing was not ideal. Ultimately, I decided to dedicate the week to reading several books I had long wanted to explore.
During my rest week, Emily and I were having lunch, and our conversation turned to nieces and nephews. As we talked, I became increasingly convinced that the younger generation will face complex ethical questions regarding technology—questions that we cannot yet fully imagine. I realized that while I had opinions on some emerging technologies, my understanding was insufficiently informed. This conviction prompted me to shift the focus of my learning week toward a deep dive into the technology and ethics shaping our world today, with a particular emphasis on artificial intelligence (AI). What follows is a summary of my findings and reflections. This document includes a short version of my thoughts, followed by an annotated bibliography of the books I read. My eventual goal is to write a much more detailed account.
A final remark: the technology I discuss is changing rapidly, so some of what I write will inevitably become outdated. I am also attempting to write from a place of basic technical expertise while keeping the content understandable to a general audience. What follows is the result of this effort, and I acknowledge that it may fall short of both goals.
A key challenge emerged as I read books about technology and ethics, particularly those addressing AI ethics. While many authors offered powerful theological and philosophical ideas, they often seemed to lack a strong grasp of the science and technology that makes AI systems work. Simply put, they were theologians and ethicists who liked technology but lacked formal training or experience in it. This was especially true of several of the books I initially came across. I was looking for insights built on solid evidence and a sound understanding of the technology, not just ethical theory, so I found myself searching hard for better sources. As I looked for resources, I recognized a need to move beyond general commentary on technology and explore deeply how AI systems are actually built and how that architecture shapes their limits.
I want to be clear that I am not an expert in AI development. However, my academic background does provide a helpful foundation, including previous training in the design and evaluation of algorithms and work in dynamical systems. Furthermore, my master's degree in applied mathematics included coursework on neural networks, the building blocks of the large AI models we talk about today.
Building on this technical background, I decided to structure my learning week to explore not only the prevailing ethical and theological discourse but also the specific technical design and architecture of these systems. My main objective became to combine my strengths in theology and ethics with a deeper knowledge of the technology, ensuring that my reflections are informed both by the question of what we should do and by the reality of how the technology actually works.
My overall summary after my deep dive is that AI is a powerful tool that we should not be afraid to use, but one we should be careful not to abuse.
AI systems provide some incredible opportunities when used properly and can change the way much work is done. Tasks such as editing bodies of text, uncovering patterns in data, and summarizing documents are well suited for AI. AI is capable of debugging computer code, evaluating reasoning, and even generating human-like responses. The AI future holds much promise, but many have raised serious concerns about it, and these concerns deserve at least some discussion.
Many discussions of AI concerns are alarmist, citing extreme scenarios such as societal collapse, the rise of the antichrist, or the end of the world. I view these concerns as largely exaggerated—often akin to clickbait intended to provoke fear. Nonetheless, there are immediate and practical concerns that merit attention.
Anyone who has used ChatGPT or a similar large language model (LLM) will quickly notice that inaccuracies occur. Fundamentally, these systems are designed to predict the “next best word” in a sequence, not to recall facts. While this difference might seem minor, it has enormous implications for how the systems work. AI is not really outputting a block of information; it is building an idea one word at a time. If you were asked to write a story about Jane’s visit to the apple orchard, you would probably consider several elements of the plot, arrange them in your mind, and then begin building the story. That is not fundamentally how AI is designed: it “thinks” one word at a time. For example, consider the sentence:
“Jane went to the apple orchard hoping to climb a tree and pick a _______.”
Most people would naturally choose “apple,” but some might select “peach” to add a twist or humor. AI works similarly: it predicts the word most likely to follow the preceding text based on patterns learned during training. Early choices in a sequence can significantly influence subsequent outputs. If an initial prediction is slightly off, the system may continue along a path that compounds the error, producing outputs that are inaccurate, misleading, or simply unexpected.
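For readers who want to see the mechanics, here is a minimal sketch in Python of that one-word-at-a-time process. The candidate words and their probabilities are invented for illustration; a real LLM scores tens of thousands of possible tokens at every step.

```python
import random

# Invented probabilities for the next word after "...pick a ___".
# A real model learns these weights from patterns in its training data.
candidates = {"apple": 0.85, "peach": 0.10, "ladder": 0.05}

def pick_next_word(probs):
    """Sample one word, favoring the most likely continuations."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights)[0]

context = "Jane went to the apple orchard hoping to climb a tree and pick a"
print(context, pick_next_word(candidates))

# A full model repeats this step, appending each chosen word to the
# context before predicting the next one, which is why one early
# misstep can push every later prediction further off course.
```

Run a few times, this script usually prints “apple” but occasionally “peach” or “ladder,” which is exactly the mix of predictability and surprise described above.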
Understanding this mechanism is essential to contextualize errors in AI-generated content. While AI can provide impressively coherent and informative text, users must remain aware that it does not “know” truth in the human sense—it is a pattern-matching system. Recognizing this helps users approach outputs critically, verifying important information rather than accepting it at face value.
As followers of Christ, we serve a God of truth. Jesus says it plainly in John 14:6: “I am the way and the truth and the life. No one comes to the Father except through me.” Because truth is central to our faith, we must be careful not to fall for — or pass along — falsehoods. Repeating something untrue doesn’t just weaken our arguments; it undermines our witness to a God who is Himself truth. That reality raises the bar for Christians using tools like ChatGPT: we should welcome their help, but we must check everything they produce against Scripture, sound reasoning, and reliable sources before we share it. The Christian who reposts false information is, in my opinion, just as guilty of deception as the person who created it, and if that information came from a tool like AI, the responsibility is no different.
Bias is inherent in all human communication. Our backgrounds, experiences, desires, and sources of information naturally shape how we interpret and share knowledge. No matter how hard you try, you will interpret the world through your past experiences and knowledge, and any claim to have eliminated bias is a false claim. People have a natural tendency to assume that a computer, as a purely logical entity, enjoys an absence of bias, but this is not true: the computer was programmed by a person bearing bias, the data in a computer naturally contains bias, and the way the output of a computer is interpreted will again involve bias. It is unrealistic to expect a computer system to be free of bias. This is true of computers programmed in the traditional way and of computers running large language models.
Large language models like ChatGPT are trained on massive datasets collected from publicly available text, including sources such as Wikipedia, forums, and news articles. These texts inherently reflect the biases, perspectives, and assumptions of the authors. Consequently, the AI’s initial outputs reflect the biases embedded in the training material.
After the initial training, these models undergo supervised fine-tuning. Human trainers interact with the system and provide feedback. For example, if the system says, “My favorite food is spaghetti,” a trainer may respond, “Spaghetti is messy; it cannot be your favorite,” introducing a subtle form of bias. While some fear that AI will produce biased or manipulative outputs, it is important to recognize that bias exists in all human-generated information. The key responsibility of discerning users is to compare AI outputs with Scripture, empirical evidence, and reliable sources to evaluate truthfulness and appropriateness.
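As a rough illustration of that feedback loop, here is a hypothetical Python sketch. No real system works this literally (actual fine-tuning adjusts billions of numerical weights through gradient updates), but it shows how a trainer’s judgment, bias included, gets folded back into the model’s behavior.

```python
# Hypothetical scores for how likely the model is to produce each
# response; real systems encode this in billions of network weights.
response_scores = {
    "My favorite food is spaghetti.": 0.8,
    "I do not have personal preferences.": 0.5,
}

def apply_trainer_feedback(response, approved, step=0.2):
    """Nudge the model toward or away from a response based on feedback."""
    response_scores[response] += step if approved else -step

# The trainer objects to the spaghetti claim, so its score drops.
# The trainer's own tastes and assumptions are now part of the model.
apply_trainer_feedback("My favorite food is spaghetti.", approved=False)
print(response_scores)
```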
One of the concerns often raised about LLMs is whether they violate copyright rules. While I am not a lawyer, I have thought about this issue and have formed a perspective grounded in both the nature of intellectual property and the way AI systems operate. My belief is that copyright should protect the expression of ideas, such as the specific wording or creative presentation in a book, article, or other media, as well as the steps or process by which something is done. I do not believe that copyright should necessarily protect the underlying truths or facts themselves. Truth exists independently of how it is recorded, so the intellectual property of an author lies in the particular way they convey that truth, not in the truth itself. In a math class, this is like valuing a student’s work more than the final answer.
Practically, this distinction matters when considering AI systems like ChatGPT. These models do not store or reproduce exact copies of copyrighted material. Instead, they analyze vast amounts of text to learn patterns in language, structure, and meaning. When an LLM generates text, it is not retrieving words verbatim from a source but rather synthesizing and expressing information in a new form based on its training. This process is more akin to a human recalling and summarizing information from memory than it is to copying a page of a book.
From this perspective, the AI is not committing copyright infringement, because it is producing a novel expression rather than duplicating an existing work. Of course, there are nuances and gray areas—particularly if AI is prompted to reproduce very specific passages—but fundamentally, the system is encoding knowledge and patterns, not storing literal words. In a sense, the AI functions like a human who has learned information and then articulates it in their own words. Therefore, while copyright concerns are understandable and worth monitoring, they should not overshadow the broader ethical and practical considerations of AI use.
While the above is my own personal opinion on the matter, the legal question is not fully settled. Copyright holders who argue that LLMs violate their rights contend that the initial act of copying millions of publicly available copyrighted works to create the training dataset is, in itself, a violation of their exclusive right of reproduction. Personally, I believe this initial copying should be viewed as a technical step necessary for learning patterns, similar to how a human reads, but I recognize that current law is less clear. Copyright holders further contend that this unauthorized use constitutes a form of theft that harms the market for their content. The other argument is that even if the output of an LLM is different from the copyrighted work, it serves as a substitute for the work and as such harms the market for the original. The current legal landscape therefore divides into two major questions: (1) Is the use of copyrighted works for training protected by fair use because it is transformative? and (2) Can the output of an LLM be considered infringing if it is substantially similar to an existing work or serves as a direct market substitute?
Another concern that is often overlooked is the environmental footprint of AI systems. Modern LLMs require immense computational power, which translates into significant energy use. According to a 2024 article by Goldman Sachs, a single ChatGPT query consumes roughly ten times as much energy as a Google search. Furthermore, data center power demands are projected to grow by 160% by 2030, largely due to AI and similar high-performance computing applications.
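To put rough numbers on that comparison: the estimates commonly cited alongside the Goldman Sachs figure are about 0.3 watt-hours for a Google search versus roughly 2.9 watt-hours for a ChatGPT query (actual consumption varies by model and query, so treat these as illustrative). On their own these numbers are tiny, but 2.9 watt-hours times one million queries is about 2,900 kilowatt-hours, and popular chatbots handle many millions of queries each day, so the totals grow quickly.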
In Genesis 1:28, God entrusted Adam with the responsibility of stewarding the earth: “God blessed them and said to them, ‘Be fruitful and increase in number; fill the earth and subdue it. Rule over the fish in the sea and the birds in the sky and over every living creature that moves on the ground.’” As Christians, we understand this as a calling to care for creation. That doesn’t mean every environmental concern requires us to immediately stop what we’re doing, but it does mean we have a responsibility to discern whether our actions honor God’s command to wisely manage the world He has placed under our care.
With that in mind, I don’t believe the current environmental impact of AI demands that we halt its development altogether. However, it is something we should pay attention to and take seriously. There is encouraging work already being done to improve the efficiency of AI systems. Even so, while the underlying algorithms have become more efficient, the sheer scale of AI use continues to grow rapidly—and with it, overall energy consumption. As believers who value both truth and stewardship, we should remain aware, informed, and thoughtful about how AI shapes the world God has asked us to care for.
Other environmental impacts include water usage for cooling systems and carbon emissions generated by both the development and operation of these large-scale systems. These concerns highlight the need for sustainable AI practices and increased awareness of the hidden costs associated with advanced technologies. As energy demands rise, we may also see indirect impacts, such as increased energy prices, that affect society more broadly.
One of the most significant ethical risks posed by AI is its ability to generate realistic misinformation. AI-generated text, images, and videos can be highly convincing, making it increasingly difficult to distinguish between real and fabricated content. A notable early example occurred on March 23, 2023, when a video of Will Smith eating spaghetti circulated online. Although the video contained clear errors, it immediately demonstrated the potential of AI to produce misleading media.
Since then, AI capabilities have advanced rapidly. Today, systems can generate text, audio, and video that appear extremely realistic. This ability to create convincing but false content has profound implications for public discourse, journalism, and individual decision-making. AI-generated misinformation has the potential to spread quickly and influence opinions, making media literacy, critical thinking, and ethical oversight essential components of AI deployment.
As with inaccuracies and biases, the risk of misinformation underscores the importance of approaching AI-generated content with discernment. Users must verify claims, cross-reference sources, and maintain a principled framework—rooted in truth and ethical reasoning—when interacting with AI outputs.
While immediate issues like inaccuracies, bias, and misinformation deserve attention, I also believe there are longer-term existential concerns associated with AI that warrant careful reflection. Chief among these is dehumanization.
One of the greatest risks we face is the gradual reduction of humanity to mere data points. In Genesis 1:26-27, God creates mankind in His own image. The context suggests that this image refers not only to God’s nature but also to humanity’s role as God’s representatives to creation. Humanity was made to serve as God’s ambassadors, exercising stewardship and care over the world. The New Testament builds on this, with James linking our role as God’s representatives to the imperative to treat others with dignity (James 3:9). Throughout Scripture, a recurring theme is that human beings are called to reflect God’s character in how they treat one another.
AI presents several risks in this regard. At the extreme, AI could be used to make life-and-death decisions without reference to human dignity, for example, autonomous weapons selecting their own targets or algorithms rationing medical care.
While these scenarios illustrate real dangers, they are also obvious ones, and if they occur I expect a great many of us will take a stand and name them as ethical problems. I believe the more subtle and insidious form of dehumanization lies in how AI reshapes our daily relationships and interactions, and that this form of dehumanization is a present danger deserving our attention now.
The theology of the Trinity has significant implications not only for our understanding of God (theology proper) but also for anthropology—the study of humanity in God’s creation. God’s very nature is relational, and if we are made in God’s image, relationship is foundational to our existence. AI, however, increasingly mediates our relationships, often substituting efficiency for meaningful human interaction.
For example, five years ago, before releasing a document like this, I would have asked my wife to proofread it. We would have discussed challenging wording, collaborated, and developed a better version together. Today, I might instead ask AI to proofread or suggest edits. While faster, this approach trades relational engagement for efficiency. The act of collaborating, learning, and growing with another person is gradually replaced by interactions with a machine.
This phenomenon extends beyond menial tasks. Increasingly, people engage in lengthy conversations with AI chatbots, treating them almost like therapists. Social media and AI systems curate content to keep users engaged, feeding information that aligns with prior behavior and preferences. This can amplify tendencies toward isolation, polarization, and obsession with digital media. Political biases are reinforced: conservatives may be exposed to extreme right-wing content, while liberals are fed progressive material, each increasingly disconnected from broader perspectives. Men are particularly susceptible to AI-driven reinforcement of sexualized content, which encourages further device engagement and isolates them from real-world interactions.
Dehumanization is therefore not just about AI “taking over” intelligence or decision-making; it is about how AI changes human behavior and relationships. The concern is not the individual using ChatGPT to reword a paragraph; it is the person spending hours scrolling curated content, unknowingly discipled and manipulated by algorithms that have no regard for spiritual, emotional, or moral well-being. Rather than fearing AI as an autonomous superintelligence, we should focus on the real, present danger of AI being misused by bad actors to manipulate people and erode the relational and ethical fabric of human life.
My sabbatical learning week reinforced several key insights. AI is an extraordinarily powerful tool that should be approached with care, discernment, and ethical oversight. Extreme fears of AI are often exaggerated, yet legitimate concerns—including inaccuracies, bias, environmental impact, copyright issues, misinformation, and dehumanization—require thoughtful and proactive engagement.
Moving forward, it is essential to approach AI with both technical understanding and ethical reflection. Our engagement with these systems should integrate discernment with a commitment to truth, justice, relational flourishing, and faithful stewardship. AI can serve humanity well, but only if we remain vigilant about the ways it shapes not only what we do, but how we relate to one another as beings made in the image of God.
What follows is a bibliography of the resources I read, listened to, or watched over the week of learning, along with summaries of my impressions of each resource.
This video series provides an excellent and visually intuitive introduction to neural networks and the core mathematical ideas behind modern AI systems. Grant Sanderson (3Blue1Brown) uses engaging visuals to explain concepts such as gradient descent, backpropagation, and ultimately the foundations that lead to transformer-based models—the key architecture behind today’s Large Language Models (LLMs) such as ChatGPT. While the playlist assumes some familiarity with Calculus III and Linear Algebra, the explanations remain approachable, and even viewers without the basic mathematical background one might get in an engineering degree can still gain meaningful insight from the presentations. For anyone wanting to understand the fundamental concepts that make LLMs possible, this playlist is highly recommended.
This roundtable discussion offers valuable insight into both the benefits and the risks of incorporating artificial intelligence into ministry contexts. The presenters—Bill Hendricks, Drew Dickens, John C. Dyer, and Kasey Olander—combine practical ministry experience with theological reflection and technological awareness. Dyer and Dickens, in particular, contribute expertise in digital tools and AI, helping viewers understand not only how AI functions but how it intersects with discipleship, pastoral care, and church practices.
A key strength of this video is its measured and biblically grounded approach. The presenters avoid both alarmist rhetoric and uncritical enthusiasm. Instead, they model discernment, emphasizing wisdom, stewardship, and faithfulness to Scripture as the guiding principles for evaluating new technologies. The discussion highlights opportunities for ministry enhancement—such as improved communication, administrative support, and accessibility—while also cautioning against ethical pitfalls, depersonalization, and overreliance on automation.
This resource is highly recommended for pastors, church leaders, and Christian educators seeking a trusted, balanced, and thoughtful perspective on how to use AI in ways that honor Christ and serve the church.
Freitas approaches the topic of technological advancement not as a technical or ethics expert, but as a clinical psychologist. He examines the social and psychological impacts of rapid technological change, especially the way individuals are developing digital identities that increasingly overshadow or distort their real-world identities. Freitas’ central argument is that the integration of technology into daily life is eroding essential elements of authentic human identity, interpersonal connection, and emotional well-being.
While Freitas raises valid concerns regarding mental health and the fragmentation of self, his analysis is hindered by a consistently negative and dismissive tone toward religion. He frequently portrays religious belief as irrational or outdated, which limits his credibility among readers seeking a fair and interdisciplinary discussion. Although the book may prompt reflection on the psychological effects of technology, its condescending posture and lack of balanced engagement with spiritual or ethical perspectives significantly weaken its usefulness. I would not recommend this book.
In 2084: Artificial Intelligence and the Future of Humanity, John Lennox offers a thoughtful and accessible exploration of the challenges and ethical questions surrounding emerging technologies, particularly artificial intelligence and transhumanism. Lennox, a mathematician and Christian apologist, writes with both intellectual rigor and pastoral sensitivity, making complex scientific and philosophical ideas understandable to a broad audience. He provides a balanced examination of AI—recognizing its tremendous potential while also warning of the moral and theological pitfalls that can arise when humanity seeks to redefine or surpass its God-given limits.
Lennox’s background in science allows him to engage the technology on its own terms, while his theological insights keep the conversation grounded in a biblical worldview. He carefully considers the implications of AI systems, including those similar to ChatGPT, and reflects on how they intersect with questions of human identity, free will, and the image of God (imago Dei). Lennox captures many of the nuances of systems like ChatGPT in the way he discusses the technology; this level of nuanced understanding was not something I found in many, if any, of the other books.
It is worth noting that there are two editions of 2084; while the first is valuable, the second edition significantly refines and expands Lennox’s discussion to address more recent developments in AI. Overall, this book is highly recommended for Christians seeking a clear, well-reasoned, and biblically anchored framework for thinking about the future of artificial intelligence and humanity’s place within it.
This book provides a thorough examination of current developments in bioengineering, exploring both the genetic and prosthetic frontiers. It also includes a very short commentary at the end on the development of artificial intelligence and its potential implications. The work is grounded in a strong theological framework while offering enough scientific detail to establish the author’s credibility in the field. Shatzer provides an accessible explanation of complex topics, including genetic engineering techniques such as CRISPR, keeping the book interesting for those with a technical background while remaining approachable for readers without specialized expertise.
A key strength of the book is its ethical focus. The author emphasizes the concept of Imago Dei (the image of God) and argues that ethical considerations must be central to discussions about bioengineering. Shatzer also warns against alarmist or “knee-jerk” reactions from Christians who lack subject-matter expertise, noting that such responses can marginalize the Christian voice in scientific and ethical conversations. By presenting a measured, informed perspective, the book encourages Christians to engage responsibly and thoughtfully with emerging technologies.
Overall, this book is highly recommended for anyone seeking a balanced and insightful perspective on transhumanism, combining theological reflection with scientific and philosophical understanding.
Thacker argues that digital technology is discipling us—shaping our habits, beliefs, moral intuitions, and even our sense of identity—whether we realize it or not. He offers a thoughtful and balanced Christian perspective on the rise of misinformation, the influence of social media, and the challenges of digital privacy. Thacker accurately describes the polarization fueled by algorithm-driven echo chambers, showing how digital platforms reinforce biases and diminish our capacity for meaningful dialogue.
Grounded in biblical and theological reasoning, Thacker calls followers of Jesus to engage technology with wisdom, discernment, and intentionality. Rather than rejecting technology outright, he encourages believers to use it in ways that foster truth, virtue, neighbor-love, and a faithful witness to the Gospel. This book is pastorally sensitive, theologically sound, and highly relevant for Christians navigating the digital age. I highly recommend it.
In this introductory work on artificial intelligence, Thacker presents a Christian worldview framework for understanding both the opportunities and the risks of AI. He affirms that technological innovation can serve the common good, yet cautions that AI also raises ethical concerns related to dignity, privacy, surveillance, identity, and human value. Thacker reminds readers that humanity’s core calling is to bear the image of God, and therefore we must resist any technological trend that dehumanizes people, reduces them to data, or undermines moral agency and community.
The book’s strength lies in its theological reflection and accessibility for readers new to the topic. However, the work provides only a broad overview of AI and does not reflect technical depth in computer science or machine learning. Readers seeking a robust understanding of how AI systems function will find the explanations limited, and the lack of technical depth may lead a technical reader to question the author’s expertise. Nevertheless, as a beginner-friendly introduction that helps Christians think ethically and biblically about AI, I recommend this book.
This edited volume provides valuable historical context on the rise of artificial intelligence in both popular culture and public imagination, coupled with thoughtful Christian reflections on these developments. Multiple contributors explore how literature, film, and media have shaped society’s expectations and fears surrounding AI, often presenting AI either as humanity’s salvation or as a threat to human existence. The book’s strongest sections examine the underlying messages about human identity embedded within cultural portrayals of AI and raise important theological questions about what it truly means to be human in an increasingly technological world.
A noteworthy contribution of the book is its critique of the philosophical assumptions that guide much of modern AI discourse—particularly the influence of mind–body dualism. For example, the Turing Test (first proposed by Alan Turing) is grounded in the notion that consciousness or intellect can exist independently from the body. The book challenges readers to consider how this assumption interacts with Christian theological perspectives, such as dichotomy, trichotomy, or embodied “integrated” understandings of human nature. The question of whether intelligence or personhood can be divorced from embodiment is central to both theology and AI ethics, and this volume helpfully draws attention to that tension.
However, while the book offers meaningful cultural and theological commentary, it contributes less substantively to the practical ethical discussion surrounding AI. Its focus leans more toward cultural critique than engagement with current technological realities, policy concerns, or concrete ethical frameworks for Christians navigating the age of AI. Readers seeking guidance on present-day ethical issues—such as algorithmic bias, surveillance, or AI in ministry and daily life—may find the book limited.
Overall, the volume serves as a valuable introductory reflection on the cultural and theological narratives shaping Christian thought about AI and raises worthwhile questions about embodiment and human identity. It is a helpful read for those interested in the intersection of theology and culture, though less so for those seeking detailed ethical analysis or practical direction regarding contemporary AI challenges.
This textbook provides a highly technical and comprehensive explanation of modern artificial intelligence systems, with a focus on Natural Language Processing (NLP), neural networks, and Large Language Models (LLMs). Xiao and Zhu detail the mathematical foundations, architectural designs, and training methodologies underlying state-of-the-art models, including the types of systems that power tools such as ChatGPT.
The book assumes a strong academic background in advanced mathematics and computer science—particularly Calculus III, Linear Algebra, Probability Theory, and algorithmic theory. For those seeking a rigorous and in-depth understanding of LLMs and the evolution of NLP, this is an essential resource. However, it is not suitable for casual readers or those without significant technical preparation. For graduate students, researchers, or advanced practitioners, it serves as a valuable and authoritative guide.