
Artificial Intelligence: For Good Or Evil

Jason Thacker, Creative Director at the Ethics and Religious Liberty Commission, discusses Artificial Intelligence, or AI. Thacker explains the benefits and dangers stemming from the use of AI, and where the Church can enter into this discussion.



Transcript: Artificial Intelligence: For Good Or Evil

TRACI GRIGGS: Thank you for joining us today for Family Policy Matters. I’m Traci DeVette Griggs, Director of Communications, sitting in this week for John Rustin. Many of us may still consider artificial intelligence, or AI, the stuff of daydreams or of a sci-fi novel or movie. But those who spend their days studying AI—as many in our local universities are doing—understand that it’s coming much more quickly than most people realize.

As a response to that, the Ethics and Religious Liberty Commission, or ERLC, of the Southern Baptist Convention recently issued a statement on artificial intelligence, which was signed by a long list of evangelical leaders. Jason Thacker with the ERLC was one of those signers and he’s with us today to talk about why that statement was necessary. 

Jason serves as creative director and associate research fellow at ERLC, where he oversees all creative projects while writing and speaking on human dignity, technology and artificial intelligence. He has a new book on AI and human dignity coming out in 2020. Jason Thacker, welcome to Family Policy Matters.

JASON THACKER: Traci, thank you so much for having me.

TRACI GRIGGS: So why don’t you begin by defining artificial intelligence? What is it and how prevalent is it in our lives today?

JASON THACKER: I think that’s a really important question, because when we talk about artificial intelligence in our culture, there’s often some hype and misunderstanding about what it really is. Essentially, AI is non-biological intelligence. What we mean by that is a machine that can exhibit certain intelligent features—being able to process or think or learn in certain respects. A lot of those are loaded words, but essentially it’s a machine that can process information and data and produce a result. And there are two forms of artificial intelligence. The one that’s in the sci-fi novels and movies and popular culture is actually artificial “general” intelligence, which is a human-like intelligence that we don’t currently have and don’t know is even possible. But then there’s another form of AI, which is widely used, and we interact with it every day. That’s “narrow” AI, or just artificial intelligence: a machine that can perform a certain task. An example of that would be Siri on my iPhone. I can say, “Hey Siri,” and tell her to do something specific, and she is able to do that. But you notice, sometimes it’s really rigid. It’s not able to do everything I ask it to do—just a very narrow task.

TRACI GRIGGS: Tell us about this recent statement issued by ERLC. Why did you all feel the need to issue the statement? Were you seeing a lack of understanding about AI, perhaps some fear related to it among Christians?

JASON THACKER: Yeah, I think a lot of those are part of the reasons we developed this statement. Last April, Google actually dropped a project called “Project Maven” with the Department of Defense. What this fairly simple AI did was process through the video data captured by the drones our military has been using pretty heavily for the last 10 or 12 years. And what happened is that Google employees voiced their concern, and even outrage, that Google would work on a project like this that could be used to strengthen targeting and surveillance systems overseas. After that debacle, what you saw was a number of tech companies putting out ethical statements. Google put one up on the Google AI blog and said: These are our guiding principles for how we think about and use AI. Because AI isn’t something that is 5 or 10 or 50 years down the road. It’s something we’re already using. Our military uses AI often. Now we’re starting to see it in medicine. We’re starting to see it at work in automation. So AI is not far off; it’s something we’re already working through, and our culture is asking a lot of questions. What does it mean to be ethical? How do we use this in a good way, versus the many abuses of AI we see—like China, which is using facial recognition technology driven by AI to really oppress certain people groups and dissidents within its country? So what we wanted to do was put forth an evangelical statement—a set of guidelines, an ethical framework—for how we feel the church should approach these types of issues, because they are very pressing and very intense. And we believe as Christians that we have the Hope of the world, that we have the Truth that God has given us on how to use these tools well. That’s what we’re doing by putting together the statement: addressing some of the misunderstandings, addressing some of the hype, but also proactively engaging this subject as the Church rather than responding to it after its impacts are widely felt.

TRACI GRIGGS: It’s a really interesting statement. In the preamble, it seems like you are tackling how we as Christians are going to think about AI, and how we are going to think about technology in general. In another place you say that the use of AI is not “morally neutral.” So help us out here. How are we as Christians to view AI and technology, in your opinion?

JASON THACKER: I think the best way for Christians to view this is as any other tool we’ve ever created or used as humanity. Technology isn’t new. Often we think of technology as our phones and our computers, these types of systems that we’ve had for the last 10, 20, 30 years. But humanity has always used technologies. So that’s what we wanted to do in this statement: go back to the very beginning of how God created us. He created us in his image. He gave us work to do. He gave us creative abilities to make certain pieces of technology, whether a shovel or a hoe, or, coming to Johannes Gutenberg, the printing press. These are tools that God has given us. We look to Matthew 22, where Jesus is being pressed on what is the greatest commandment, and he says: You are to love the Lord your God with all your heart, mind, soul, and strength, and to love your neighbor as yourself. So we’re thinking through a grid of how we use technology as a tool—not as a god, not as a substitute, but as a tool to love God and to love our neighbor. In the preamble, and throughout the statement, we’re trying to frame it that way, because AI is so powerful that in our culture we can easily put way too much trust and authority in it, or start thinking of it as something more than just a tool.

TRACI GRIGGS: That’s a really good point. I love the point about the printing press because, you know, as Christians who love our Bibles, we know what it did in making it possible to quickly print Bibles and get them to all corners of the world. So that’s a really good example.

Let’s talk about the good and the bad, because the statement that you guys issued certainly does that. It is, in many places, very hopeful. It contains phrases like: “A demonstration of the unique creative abilities of human beings,” and “innovation for the glory of God.” And then there’s another that says: “Human beings should develop and harness technology in ways that lead to greater flourishing and the alleviation of human suffering.” So talk about that. You’ve mentioned it before, but more specifically, what are some ways that you are hopeful about what technology can do as a tool for Christians?

JASON THACKER: Technology, and specifically artificial intelligence, can be used by humanity and by society to make our work more efficient, to make things quicker, and to offload very simple tasks so that we can focus on more mentally or physically intensive tasks. Just a real practical, day-to-day example of how AI is already being used: My father, just last year, had to have an amputation due to medical complications. He lost a leg. Now granted, the technology is still so futuristic in some sense that it’s unaffordable for us as a family, but you can already start to see prosthetic limbs that are controlled by our thoughts. And while that might sound robotic and kind of Star Trek-ish, you already see this technology being used to really help people who have lost the ability to use certain body parts, where a machine driven by artificial intelligence lets someone like my father regain kind of a normal life. Now granted, he doesn’t have that technology right now, but it’s available, and it’s already being used to treat some of the most intense diseases and disabilities that we have in a broken world.

TRACI GRIGGS: How about some of the applications for the Church, say doing evangelism, or reaching out to people in poverty?

JASON THACKER: There’s a growing group of Christians who are looking to harness the power of artificial intelligence to share the Gospel, to fulfill the Great Commission that Christ has given us. The way they’re doing that is by using artificial intelligence in the form of chatbots—something you can log into, where somebody can ask questions and it responds in real time, helping to guide a spiritual conversation. There’s good and bad in that. The good is that the Gospel is going forth and we’re sharing that message of Hope. But if that becomes a substitute for the Church, for authentic community as the body of Christ with one another in a physical location, we don’t want to see that. So this can aid us in sharing the Gospel and getting the Gospel into places where it is difficult, or next to impossible, to share this message, but we don’t want to leave it at that. You also see other Christians who are faithfully using and developing these technologies outside of a specific ministry or evangelism context—doctors and lawyers and economists and professors—who are starting to use this technology for the greater good, for human flourishing. Seeing that every single human being is made in the image of God and has ultimate dignity and worth, they’re harnessing this technology to do medical research, to perhaps find cures for diseases that have plagued us, where AI can step in and process more information and more data than a human being could. These AI systems are able to outperform us in very narrow tasks, but generally, humanity is obviously still more advanced than anything we have in terms of AI.

TRACI GRIGGS: Okay. You’ve mentioned this, but after some of your very hopeful statements, you go into a long list of possible abuses of AI, and there are some pretty scary prospects when you consider that. What are some of the potential abuses that we all as Believers, as Christians, need to be on the lookout for?

JASON THACKER: I think one of the most pressing right now is the way that authoritarian states, or rogue groups, are using this technology—and when I say authoritarian states, I’m specifically thinking of China right now and some of the human rights abuses it is committing as a nation against people of faith. There are stories coming out that China is actually using facial recognition technology as a way to track its citizens. And it’s doing that not just for security, which is what the government promotes, but also to oppress dissidents and people who don’t agree with the Communist Party. Specifically, that’s happening with the Uyghur Muslims, a certain people group—a faith group—within the mountains of China, who are being tracked in every move they make by artificial intelligence and this facial recognition technology, taken into custody, and then sent off to re-education camps to swear loyalty to the Chinese president and the Chinese government, to renounce their faith, and to be trained in what it means to be a good Chinese person. So you see that AI can be used in ways that prop up or strengthen more authoritarian, centralized governments. But we’re also seeing facial recognition driven by AI being debated here in the United States, where San Francisco, just last week, issued a city ban on this artificial intelligence being used in surveillance. They see the tension between privacy and security, and they’re leaning more toward the privacy side.

TRACI GRIGGS: Well we’re just about out of time. Jason, where can our listeners go if they want to read more, either about the ERLC statement on artificial intelligence, or about this topic of AI in general?

JASON THACKER: You can go to ERLC.com/AI, and that is where we list our statement. We also have a number of resources on our website that you can find under our technology topic.

TRACI GRIGGS: Great. Jason, thank you so much for being with us on Family Policy Matters today, for keeping up with our world’s rapidly changing technologies, and for helping us keep our focus on the fundamentally important truths of life.

– END –

