Here’s What Happened When I Tried Google’s Unbelievable New AI

Oct 2, 2024 | News

By: Akos Balogh

‘Hey, come and check this out!’ I yell out to my teenage daughter.

I had just generated a 10-minute podcast episode (yes, an entire podcast episode) using Google’s new Notebook LM website. I uploaded a PDF of my latest blog post, pressed a button (no prompting required), and out came a 10-minute audio file with the voice of a man and woman discussing my article in a ‘deep dive’ format.

‘Turn it off!’ says my daughter. ‘It’s too weird!’

My Gen Z digital native daughter doesn’t want a bar of it. I’m a little surprised, although I shouldn’t be. When I first tested it a day or so ago, I had a similar reaction: it was weird. It was strange. It was discombobulating.

An AI speaking in a human voice, conversing with another AI? Up until a day ago, that was science fiction. And yet here we are.  

Disturbing

I’ve since shown this feature to other people, and the responses ranged from outright rejection to a milder ‘this is wild, in a disturbing sort of way.’

If you have yet to hear it, you can check it out below (for best results, skim my blog post on which the podcast is based to get a sense of how scarily accurate it is).

So what do we make of all this?

Rapid Acceleration

AI technology is accelerating faster than most realise

I recently heard AI developer and researcher Dario Amodei (the CEO of AI company Anthropic) argue that AI is accelerating exponentially. And many AI observers agree. What’s considered science fiction today is something you’ll be able to do tomorrow.

So let’s buckle up: we’re in for a wild ride.

Amodei believes we will see major scientific breakthroughs in the next two years thanks to advancing AI technology: think ‘cure for cancer’ level breakthroughs. Time will tell if he’s right.

Will Robots Take Over?

Is this the end of human podcasting? I don’t think so.

New technologies are disruptive. And some podcasters are getting a little nervous after seeing this technology in action.

It’s not irrational to be concerned about how AI will affect human podcasting. According to the discipline of Media Ecology, new technology is not additive; it’s transformative. When the iPod came in, it didn’t just sit alongside CDs and cassette tapes: it transformed the whole music industry, making CDs and cassette tapes redundant.

And in a few years, we’ll be looking back at this tech and thinking, ‘This was when it all began’ – like we look back at the launch of the iPod.

But will this mean the end of human podcasting?  I really don’t believe it will.

If anything, I wonder if human podcasts – and human-generated content – will only grow in value as AI-generated content floods the internet. It’s like how people yearn for live music experiences, even though they can listen to high-quality music on demand via their earphones.

Like any technology, AI technology shapes us according to its built-in ‘values’. Technology is never ‘value-neutral’.

Instead, it’s designed according to specific values, which shape us, whether we’re using it for good or evil purposes. Take smartphones: you can use them for good by facetiming loved ones. Or you can run a drug cartel with them.

But smartphones do more than that. They shape your focus and attention span: they’re designed according to the value of ‘constant, instant connectivity’, so you feel a grave sense of FOMO if you don’t have your smartphone within arm’s reach. And you feel compelled to constantly check it – to be connected – to ensure you don’t miss anything.

Before you know it, you’ve developed a habit of pulling it out when there’s a dull moment. And prolonged use will reduce your ability to concentrate on less engaging content (e.g. a book or a long movie). [1]

How Will AI Shape Us?

So, how will AI shape us? What’s the fundamental value of AI?

AI’s key value is to do our thinking for us, whether that’s choosing what content we see on social media, generating content from a prompt, or, in the case of Google’s Notebook LM, generating a podcast at the press of a button. On the one hand, that’s fantastic: AI is powerful. It will help solve problems and create things we can’t even begin to imagine (fusion energy, cures for diseases, maybe even star travel?).

But if we outsource our thinking to AI, what will that do to us?

Well, ask any teacher. If students get AI to do their assignments, they don’t learn. They don’t grow in wisdom. And that’s the danger we face if AI does all our thinking for us.

A Better Way Ahead: ‘Disciplined Discernment’

As I argued in my AI talk at Moore College earlier this year, Christians need to cultivate the value of ‘disciplined discernment’: carefully discerning and making the most of the benefits of technology, while mitigating the negatives.

Disciplined discernment means having a ‘testing’ mindset as we use new technology.

When it comes to Google’s podcast episode generation, I’ll be testing it by generating episodes that summarise content I need to read (e.g. articles, even books), to give me an overview. I love listening to content, so going for a walk while getting a summary of a book or article could be helpful. I would then go back and read the article/book for myself, and go from there.

But I won’t (and don’t) use AI to write my blog posts.

Why not? Because writing is thinking: it’s where I grapple with ideas, trying to make sense of the world. Writing is how I process the world from a Christian perspective. I’ve learned and grown immensely from writing – and had I outsourced all or part of it to AI, I would not have the knowledge I have today.

So as you come across new technology, don’t be afraid to try it out. Test it. Find out the positives, while becoming aware of the negatives.

Because we ain’t seen nothing yet.


Article supplied with thanks to Akos Balogh.

About the Author: Akos is the Executive Director of the Gospel Coalition Australia. He has a Master’s in Theology and is a trained Combat and Aerospace Engineer.

Feature image: Photo by Possessed Photography on Unsplash